An arithmetic compression and decompression method for FPGA configuration files based on a neural network model

A neural network model and configuration file technology, applied to the arithmetic compression and decompression of FPGA configuration files based on a neural network model. It addresses the problem of an overly long configuration process, and achieves the effects of improving the compression rate, reducing the coding length, and improving accuracy.

Active Publication Date: 2021-10-08
XI AN JIAOTONG UNIV

AI Technical Summary

Problems solved by technology

[0006] The technical problem to be solved by the present invention is to provide a neural network model-based FPGA configuration file arithmetic compression and decompression method that improves the compression rate of the configuration file, thereby effectively solving the problem that the configuration process takes too long.



Examples


Embodiment

[0104] Take as an example the v5_crossbar.bit test data, implemented on a Xilinx Virtex-5 development board, from the standard benchmark set of the Department of Computer Science at the University of Erlangen-Nuremberg in Germany. The configuration file is 8179 bytes. Specifically:

[0105] S1. Adopt the arithmetic-coding compression strategy for the FPGA configuration file. The symbol is defined as a bit, so Ds = {0, 1}, S_N is the binary code stream of the configuration file, and k is set to 64;
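The arithmetic-coding strategy of step S1 can be sketched with a standard integer-range (Witten-Neal-Cleary style) binary arithmetic coder. This is an illustrative implementation, not the patent's own code; here `prob0` stands in for the model's predicted P(bit = 0) at each position, and each probability must be strictly between 0 and 1.

```python
# Illustrative binary arithmetic coder over the alphabet Ds = {0, 1}.
PRECISION = 32
FULL = (1 << PRECISION) - 1
HALF = 1 << (PRECISION - 1)
QUARTER = 1 << (PRECISION - 2)

def encode(bits, prob0):
    """Encode a bit sequence; prob0[i] is the model's P(bit i == 0)."""
    low, high = 0, FULL
    pending = 0          # bits deferred by the "straddle" (E3) condition
    out = []

    def emit(b):
        nonlocal pending
        out.append(b)
        out.extend([1 - b] * pending)  # flush deferred opposite bits
        pending = 0

    for bit, p0 in zip(bits, prob0):
        span = high - low + 1
        split = low + int(span * p0) - 1   # boundary between the 0 and 1 ranges
        if bit == 0:
            high = split
        else:
            low = split + 1
        while True:                         # renormalize the interval
            if high < HALF:
                emit(0)
            elif low >= HALF:
                emit(1)
                low -= HALF; high -= HALF
            elif low >= QUARTER and high < HALF + QUARTER:
                pending += 1
                low -= QUARTER; high -= QUARTER
            else:
                break
            low *= 2
            high = high * 2 + 1
    pending += 1                            # terminate the code stream
    emit(0 if low < QUARTER else 1)
    return out

def decode(code, prob0, n):
    """Decode n bits using the same probabilities the encoder saw."""
    low, high = 0, FULL
    value, pos = 0, 0
    for _ in range(PRECISION):              # load the first code window
        value = (value << 1) | (code[pos] if pos < len(code) else 0)
        pos += 1
    bits = []
    for i in range(n):
        span = high - low + 1
        split = low + int(span * prob0[i]) - 1
        if value <= split:
            bits.append(0); high = split
        else:
            bits.append(1); low = split + 1
        while True:                         # mirror the encoder's renormalization
            if high < HALF:
                pass
            elif low >= HALF:
                low -= HALF; high -= HALF; value -= HALF
            elif low >= QUARTER and high < HALF + QUARTER:
                low -= QUARTER; high -= QUARTER; value -= QUARTER
            else:
                break
            low *= 2
            high = high * 2 + 1
            value = value * 2 + (code[pos] if pos < len(code) else 0)
            pos += 1
    return bits
```

The decoder recomputes exactly the same `split` as the encoder at every step, which is why the model's probability estimates must be byte-for-byte reproducible on the decompression side.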

[0106] S2. Use the neural network model to estimate the probability of each symbol in the FPGA configuration file. Since the value of a symbol is correlated with the values of the preceding symbols, LSTM layers are used to construct the neural network model. The model consists of two LSTM layers and one fully connected layer, with the structure shown in Figure 1; LSTM layer 1 and LSTM layer 2 each have 128 neurons, and the fully connected layer has 2...
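The probability estimator of step S2 can be sketched as follows: two stacked LSTM layers of 128 units each, followed by a 2-unit softmax over the symbol alphabet {0, 1}. The weights below are random placeholders (the patent's trained parameters are not given), so this shows the shapes and data flow, not the trained behavior.

```python
import numpy as np

rng = np.random.default_rng(0)
K, HIDDEN = 64, 128        # context length k and LSTM width from the embodiment

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def make_lstm(in_dim, hid):
    # One stacked weight block for the four gates: input, forget, cell, output.
    return {"W": rng.normal(0, 0.1, (4 * hid, in_dim + hid)),
            "b": np.zeros(4 * hid)}

def lstm_forward(params, xs):
    """Run one LSTM layer over a sequence; returns the hidden state per step."""
    hid = params["b"].size // 4
    h, c = np.zeros(hid), np.zeros(hid)
    hs = []
    for x in xs:
        z = params["W"] @ np.concatenate([x, h]) + params["b"]
        i, f, g, o = np.split(z, 4)
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c = f * c + i * np.tanh(g)      # cell state update
        h = o * np.tanh(c)              # hidden state output
        hs.append(h)
    return hs

layer1 = make_lstm(1, HIDDEN)           # input: one bit per time step
layer2 = make_lstm(HIDDEN, HIDDEN)
W_out = rng.normal(0, 0.1, (2, HIDDEN)) # fully connected layer, 2 outputs
b_out = np.zeros(2)

def predict_next_bit_probs(context_bits):
    """Estimate (P(next = 0), P(next = 1)) from the previous k bits."""
    xs = [np.array([float(b)]) for b in context_bits]
    hs = lstm_forward(layer2, lstm_forward(layer1, xs))
    logits = W_out @ hs[-1] + b_out
    e = np.exp(logits - logits.max())   # numerically stable softmax
    return e / e.sum()

probs = predict_next_bit_probs([0, 1] * (K // 2))
```

In practice the model would be trained so that the two softmax outputs approach the true conditional distribution of the next bit given its k-bit context.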



Abstract

The invention discloses an FPGA configuration file arithmetic compression and decompression method based on a neural network model. It defines the content sequence of the FPGA configuration file and uses the conditional probability distribution of each symbol, determined by its preceding k items of data, as that symbol's probability in the arithmetic coding process. The probability of each symbol in the FPGA configuration file is estimated by the neural network model; arithmetic coding then compresses the file using the probabilities predicted by the established neural network, and the FPGA configuration file is decompressed correspondingly. The invention uses a neural network model to estimate the probability of the configuration file's sequence data and uses the estimation results to compress and decompress the FPGA configuration file, which solves the problem that the FPGA configuration process takes too long.
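The "first k items of data" conditioning described above amounts to modeling each symbol s_i by P(s_i | s_{i-k} ... s_{i-1}). A minimal sketch of extracting those (context, target) pairs from the bitstream follows; zero-padding the first k positions is an assumption here, since the text does not state how the initial symbols are handled.

```python
# Build (k-bit context, next bit) pairs from a configuration bitstream.
# The leading zero padding is an illustrative convention, not the patent's.
def context_pairs(bits, k=64):
    padded = [0] * k + list(bits)
    return [(padded[i:i + k], padded[i + k]) for i in range(len(bits))]

pairs = context_pairs([1, 0, 1, 1], k=4)
# each element pairs a sliding k-bit window with the bit it should predict
```

During compression the model consumes each context and emits the probability that the arithmetic coder uses to narrow its interval; during decompression the same contexts are reconstructed bit by bit from already-decoded output, so the probabilities match.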

Description

Technical field

[0001] The invention belongs to the field of electronic technology, and in particular relates to an FPGA configuration file arithmetic compression and decompression method based on a neural network model.

Background technique

[0002] The Field Programmable Gate Array (FPGA) has in recent years received increasing attention and recognition in the field of neural network hardware acceleration because of its high performance and high flexibility, especially in applications with real-time requirements. It is widely used in areas such as high-level automotive automated driving, high-frequency stock trading, and Internet of Things computing.

[0003] Artificial intelligence and deep learning are developing rapidly. To cope with increasingly complex neural network models, the integration level of FPGA chips is constantly increasing, and the number of available on-chip resources keeps growing. While ...


Application Information

Patent Type & Authority: Patent (China)
IPC(8): H03M7/40, G06N3/04, G06N3/08
CPC: H03M7/40, G06N3/08, G06N3/047, G06N3/044, G06N3/045
Inventors: 伍卫国, 康益菲, 王今雨, 冯雅琦, 赵东方
Owner: XI AN JIAOTONG UNIV