FPGA configuration file arithmetic compression and decompression method based on neural network model

A neural network model and configuration file technology, applied in the field of arithmetic compression and decompression of FPGA configuration files based on neural network models, which addresses the problem of a time-consuming configuration process and achieves the effects of reducing configuration time and improving accuracy.

Active Publication Date: 2020-07-17
XI AN JIAOTONG UNIV

AI Technical Summary

Problems solved by technology

[0006] The technical problem to be solved by the present invention is to provide a neural-network-model-based FPGA configuration file arithmetic compression and decompression method that improves the compression rate of the configuration file, thereby effectively solving the problem that the configuration process takes too long.


Examples


Embodiment

[0104] Take as an example the v5_crossbar.bit test data, implemented on a Xilinx Virtex-5 development board, from the standard benchmark set of the Department of Computer Science at the University of Erlangen-Nuremberg in Germany. The configuration file is 8179 bytes. Specifically:

[0105] S1. Adopt an FPGA configuration file compression strategy based on arithmetic coding. The symbol is defined as a bit, so Ds = {0, 1}; S^N is the binary code stream of the configuration file, and k is set to 64.
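Step S1 fixes the symbol alphabet Ds = {0, 1} and codes the bit stream with arithmetic coding. A minimal float-based sketch of binary arithmetic coding is shown below; it is illustrative only (a production coder uses integer renormalisation to avoid precision loss on long streams), and the per-bit probabilities would come from the neural model of step S2:

```python
def encode(bits, probs):
    """Narrow the interval [0, 1) by each bit's probability; probs[i] = P(bit_i == 1)."""
    low, high = 0.0, 1.0
    for b, p1 in zip(bits, probs):
        mid = low + (high - low) * (1.0 - p1)  # boundary between the 0- and 1-subintervals
        if b == 0:
            high = mid
        else:
            low = mid
    return (low + high) / 2  # any number inside the final interval identifies the stream


def decode(code, probs):
    """Replay the same interval narrowing to recover the bits from the code value."""
    bits, low, high = [], 0.0, 1.0
    for p1 in probs:
        mid = low + (high - low) * (1.0 - p1)
        if code < mid:
            bits.append(0)
            high = mid
        else:
            bits.append(1)
            low = mid
    return bits


bits = [1, 0, 1, 1, 0]
probs = [0.7, 0.3, 0.8, 0.6, 0.2]  # hypothetical model predictions for each position
code = encode(bits, probs)
recovered = decode(code, probs)
```

Because the decoder reproduces exactly the same interval subdivisions as the encoder, the round trip is lossless as long as both sides use identical probabilities.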

[0106] S2. Use the neural network model to estimate the probability of each symbol in the FPGA configuration file. Since the value of each symbol is correlated with the values of the preceding symbols, LSTM layers are used to build the model. The model consists of two LSTM layers followed by a fully connected layer; its structure is shown in Figure 1. LSTM layer 1 and LSTM layer 2 each have 128 neurons; the fully connected layer has 2...
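The embodiment's two-layer LSTM predictor can be sketched as a plain forward pass. The sketch below uses NumPy with randomly initialised weights standing in for trained parameters; the weight values and the softmax output head are assumptions, while the layer sizes (128 units per LSTM layer, a 2-unit output, context length k = 64) come from the text:

```python
import numpy as np

rng = np.random.default_rng(0)
K, H = 64, 128  # context length k and LSTM hidden size from the embodiment


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


def make_lstm(in_dim, hid):
    # Gate weights stacked as [input; forget; output; candidate].
    return (rng.normal(0, 0.1, (4 * hid, in_dim)),
            rng.normal(0, 0.1, (4 * hid, hid)),
            np.zeros(4 * hid))


def lstm_step(x, h, c, params):
    """One LSTM time step: update cell state c and hidden state h for input x."""
    W, U, b = params
    n = h.shape[0]
    z = W @ x + U @ h + b
    i, f, o = sigmoid(z[:n]), sigmoid(z[n:2 * n]), sigmoid(z[2 * n:3 * n])
    g = np.tanh(z[3 * n:])
    c = f * c + i * g
    h = o * np.tanh(c)
    return h, c


def predict_p(context, l1, l2, Wfc, bfc):
    """Run the k-bit context through LSTM1 -> LSTM2 -> dense -> softmax."""
    h1, c1 = np.zeros(H), np.zeros(H)
    h2, c2 = np.zeros(H), np.zeros(H)
    for bit in context:
        x = np.array([float(bit)])
        h1, c1 = lstm_step(x, h1, c1, l1)
        h2, c2 = lstm_step(h1, h2, c2, l2)
    logits = Wfc @ h2 + bfc
    e = np.exp(logits - logits.max())
    return e / e.sum()  # [P(next = 0), P(next = 1)]


l1, l2 = make_lstm(1, H), make_lstm(H, H)
Wfc, bfc = rng.normal(0, 0.1, (2, H)), np.zeros(2)
p = predict_p(rng.integers(0, 2, K), l1, l2, Wfc, bfc)
```

In the patent's scheme these probabilities feed directly into the arithmetic coder of step S1; the decoder runs the same network over the same history to reproduce them.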



Abstract

The invention discloses an FPGA configuration file arithmetic compression and decompression method based on a neural network model. The method comprises the steps of: defining the content sequence of an FPGA configuration file, and taking the conditional probability distribution of any symbol, given the preceding k items of data, as the probability of the corresponding symbol in the arithmetic coding process; estimating the probability of each symbol in the FPGA configuration file using the neural network model; performing arithmetic coding compression on the configuration file using the symbol probabilities predicted by the established neural network; and decompressing the FPGA configuration file. The method performs probability estimation on the configuration file sequence data using the neural network model and compresses and decompresses the FPGA configuration file using the estimation result, thereby solving the problem that the FPGA configuration process takes too long.
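The scheme in the abstract works because the compressor and decompressor run the same predictor over the same symbol history, so they see identical probabilities at every step and the round trip is lossless. A minimal end-to-end sketch, using a simple Laplace-smoothed frequency counter as a hedged stand-in for the neural predictor (the counter, its window size, and all names here are illustrative, not from the patent):

```python
def adaptive_p1(history, k=8):
    """Stand-in for the neural model: P(next bit = 1) from the last k bits.

    Any predictor works, as long as encoder and decoder call it with
    identical histories at every step.
    """
    ctx = history[-k:]
    return (sum(ctx) + 1) / (len(ctx) + 2)  # Laplace smoothing; 0.5 on empty history


def compress(bits):
    low, high, hist = 0.0, 1.0, []
    for b in bits:
        mid = low + (high - low) * (1.0 - adaptive_p1(hist))
        if b == 0:
            high = mid
        else:
            low = mid
        hist.append(b)
    return (low + high) / 2


def decompress(code, n):
    low, high, out = 0.0, 1.0, []
    for _ in range(n):
        mid = low + (high - low) * (1.0 - adaptive_p1(out))
        if code < mid:
            out.append(0)
            high = mid
        else:
            out.append(1)
            low = mid
    return out


original = [1, 1, 1, 0, 1, 1, 0, 1]
restored = decompress(compress(original), len(original))
```

Swapping the frequency counter for the trained two-layer LSTM of the embodiment yields the patent's scheme: better probability estimates narrow the interval more slowly on likely bits, which is exactly what raises the compression rate.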

Description

Technical field

[0001] The invention belongs to the field of electronic technology, and in particular relates to an FPGA configuration file arithmetic compression and decompression method based on a neural network model.

Background technique

[0002] Field Programmable Gate Arrays (FPGAs) have received increasing attention and recognition in the field of neural network hardware acceleration in recent years because of their high performance and high flexibility, especially where real-time requirements apply. They are widely used in areas such as automotive autonomous driving, high-frequency stock trading, and Internet of Things computing.

[0003] Artificial intelligence and deep learning are developing rapidly. To cope with increasingly complex neural network models, the integration level of FPGA chips keeps rising, and the amount of available on-chip resources keeps growing. While ...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): H03M7/40, G06N3/04, G06N3/08
CPC: H03M7/40, G06N3/08, G06N3/047, G06N3/044, G06N3/045
Inventors: 伍卫国, 康益菲, 王今雨, 冯雅琦, 赵东方
Owner XI AN JIAOTONG UNIV