FPGA-based neural network accelerator supporting channel separation convolution
This invention belongs to the field of neural network accelerator hardware architecture and concerns channel-separated (depthwise) convolution. It addresses problems such as performance degradation and poor utilization of time and space, with the effects of reducing computation and storage cost, reducing memory accesses, and improving energy efficiency.
Embodiment Construction
[0038] The technical solutions and beneficial effects of the present invention will be described in detail below in conjunction with the accompanying drawings.
[0039] As shown in Figure 1, the hardware structure of the convolutional neural network accelerator of the present invention is described in detail below, taking the four convolution types listed in Table 1 as examples of its modes of operation.
[0040] The external control processor first writes, over the configuration bus, layer parameters such as the input feature map size, the number of channels, the padding, the convolution mode (fully connected, channel-separated convolution, or conventional convolution), and the on-chip-network data-flow configuration into the accelerator's registers. It then directs the DMA to write the input feature values and the weight values into the corresponding input-buffer sub-region and weight-buffer sub-region of the ORMU unit, respectively …
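The two-step start-up sequence in paragraph [0040] can be sketched in C as a register write followed by a DMA load. All register names, field widths, and the convolution-mode encoding below are illustrative assumptions, not taken from the patent; the configuration bus and DMA are modeled as plain memory writes.

```c
/* Hypothetical sketch of the accelerator configuration sequence.
 * Names and encodings are assumptions for illustration only. */
#include <stdint.h>
#include <string.h>

enum conv_mode {
    CONV_STANDARD          = 0,  /* traditional convolution */
    CONV_CHANNEL_SEPARATED = 1,  /* depthwise / channel-separated */
    CONV_FULLY_CONNECTED   = 2
};

/* Assumed register block written over the configuration bus. */
typedef struct {
    uint32_t ifm_size;   /* input feature map height/width */
    uint32_t channels;   /* number of input channels       */
    uint32_t padding;    /* zero-padding on each border    */
    uint32_t conv_mode;  /* one of enum conv_mode          */
    uint32_t noc_config; /* on-chip-network data-flow word */
} accel_regs_t;

/* Step 1: control processor writes layer parameters into the
 * accelerator registers (modeled as a struct write). */
static void configure_layer(accel_regs_t *regs, uint32_t ifm, uint32_t ch,
                            uint32_t pad, enum conv_mode mode, uint32_t noc)
{
    regs->ifm_size   = ifm;
    regs->channels   = ch;
    regs->padding    = pad;
    regs->conv_mode  = (uint32_t)mode;
    regs->noc_config = noc;
}

/* Step 2: DMA copies feature values or weights into the ORMU's
 * input-buffer or weight-buffer sub-region (modeled as memcpy). */
static void dma_load(uint8_t *buf_subregion, const uint8_t *src, size_t n)
{
    memcpy(buf_subregion, src, n);
}
```

A caller would configure one layer and then issue DMA loads for the feature and weight sub-regions before starting computation.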