
A method of calling data based on FPGA off-chip memory

An off-chip memory and data calling technology, applied in data conversion, electrical digital data processing, instruments, etc., addressing problems such as low read-write efficiency and the inability to meet data calling efficiency requirements.

Active Publication Date: 2020-09-29
BEIJING INSTITUTE OF TECHNOLOGY
Cites: 11 · Cited by: 0
  • Summary
  • Abstract
  • Description
  • Claims
  • Application Information

AI Technical Summary

Problems solved by technology

Data in the FPGA's off-chip memory is stored in banks. The main idea of current methods is to store the different input feature map data of a convolutional neural network in different banks of the FPGA's off-chip memory as far as possible. However, this approach requires frequent address jumping to access the off-chip memory, so read-write efficiency is low; in particular, for large-scale convolutional neural network calculations it cannot meet the efficiency requirements of data calls.

Method used



Examples


Embodiment 1

[0024] See figure 1, which is a flow chart of the data calling method based on an FPGA off-chip memory provided in this embodiment. The method is applied to convolutional neural networks, in particular to the process of extracting data with a sliding window from a feature map of size S×S during convolution calculation, and includes the following steps:

[0025] S1: Set up a fifo group in the FPGA on-chip memory, where the fifo group includes L fifos (fifo: first-in, first-out queue). Number the fifos from 1 to L in sequence, and determine the number M of fifos that need to output data outside the fifo group simultaneously, specifically:

[0026] L=2×kernel+Stride×(N-2) (1)

[0027] M=kernel+Stride×(N-1) (2)

[0028] Among them, kernel is the preset convolution kernel size, Stride is the step size of the sliding window used in the convolution calculation, and N is t...
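Equations (1) and (2) can be sanity-checked with a small helper. This is only a sketch; the function and variable names are hypothetical, and nothing beyond the two formulas comes from the patent text:

```python
def fifo_group_params(kernel, stride, n):
    """Size the fifo group per equations (1) and (2).

    kernel -- preset convolution kernel size
    stride -- step size of the sliding window in the convolution calculation
    n      -- number of convolution calculation units on the FPGA chip
    """
    l_fifos = 2 * kernel + stride * (n - 2)  # eq. (1): number of fifos in the group
    m_out = kernel + stride * (n - 1)        # eq. (2): fifos outputting simultaneously
    return l_fifos, m_out

# Parameters used later in Embodiment 2: 3x3 kernel, stride 1, N = 2
print(fifo_group_params(3, 1, 2))  # (6, 4)
```

With the Embodiment 2 parameters this reproduces L = 6 from the worked example, and equation (2) then gives M = 4.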

Embodiment 2

[0037] Based on the above embodiment, this embodiment describes the data calling method in detail through an example in which the size of the feature map is 15×15, the size of the convolution kernel is 3×3, the step size of the sliding window during the feature map convolution calculation is 1, and the number of convolution calculation units on the FPGA chip is 2, i.e. the number of sliding windows that need to be processed simultaneously is N=2.

[0038] Step 1. Determine the number L of fifos in the fifo group

[0039] The number L of fifos in each fifo group is determined by three parameters: the size of the convolution kernel (kernel), the step size of the sliding window during the feature map convolution calculation (Stride), and the number of convolution calculation units on the FPGA chip (N). It satisfies the following formula:

[0040] L=2×3+1×(2-2)=6

[0041] That is, there are 6 fifos in the fifo group.

[0042] Step 2. Determine the numb...



Abstract

The invention provides a data calling method based on an FPGA off-chip memory. The method comprises the following steps: storing the data of the feature map in the fifos line by line, in sequence; during each read-write operation, having each of the first M fifos output the first data it currently stores, and write that data back to the tail of the fifo whose serial number is smaller than its own by L-M; meanwhile, writing the first data of the (L+1)-th row of the feature map to the tail of the data stored in the (L-1)-th fifo, and the first data of the (L+2)-th row to the tail of the data stored in the L-th fifo, so that as the fifos continuously output data out of the fifo group in sequence, the remaining data of the feature map are written into the fifo group in sequence for reading, until traversal of the entire feature map is completed. The data of the FPGA off-chip memory are thus not called directly, complex address hopping is avoided, and the efficiency of calling data from the FPGA off-chip memory is greatly improved.
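The scheme above refines the classic line-buffer idea: keep only a few rows on chip in fifos so that each off-chip row is read exactly once, sequentially, instead of re-fetching overlapping windows. The following is a minimal software model of that underlying idea only, not the patent's exact write-back/rotation scheme; all names are illustrative:

```python
from collections import deque

def sliding_windows(feature_map, kernel, stride):
    """Emit kernel x kernel windows using per-row FIFOs, so each
    off-chip row is read only once, in order (plain line-buffer model)."""
    S = len(feature_map)
    rows = deque(maxlen=kernel)  # only `kernel` rows resident on chip
    out = []
    for r in range(S):
        rows.append(deque(feature_map[r]))  # one sequential off-chip read per row
        if len(rows) == kernel and (r - kernel + 1) % stride == 0:
            for c in range(0, S - kernel + 1, stride):
                out.append([list(row)[c:c + kernel] for row in rows])
    return out

fm = [[r * 4 + c for c in range(4)] for r in range(4)]  # toy 4x4 feature map
wins = sliding_windows(fm, kernel=3, stride=1)
print(len(wins))  # 4
```

Each off-chip element is touched once here, versus once per overlapping window under naive address-jumping access.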

Description

technical field [0001] The invention belongs to the technical field of image classification and recognition, and in particular relates to a method of calling data based on an FPGA off-chip memory. Background technique [0002] In the past five years, convolutional neural networks have achieved good results in the fields of image feature extraction, classification, and recognition. Because convolutional neural network architectures are flexible and changeable, current convolutional neural networks are mainly realized on software platforms such as CPUs and GPUs. In current engineering applications, however, demands for system real-time performance and low power consumption are becoming more and more prominent. Using hardware platforms to accelerate convolutional neural network calculations and thereby reduce system power consumption has therefore become a research hotspot for convolutional neural networks in enginee...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06N3/063; G06F5/06
Inventors: 龙腾, 魏鑫, 陈禾, 陈磊, 陈亮
Owner: BEIJING INSTITUTE OF TECHNOLOGY