70 results about "Block RAM" patented technology

Block RAM (BRAM) stands for Block Random Access Memory. Block RAMs are used for storing large amounts of data inside an FPGA, and they are one of four commonly identified components on an FPGA datasheet.

Integrated logic analysis module based on PCIe (peripheral component interconnect express) for FPGA (field programmable gate array)

CN102495920A (Active)
An integrated logic analysis module based on PCIe (peripheral component interconnect express) for an FPGA (field programmable gate array) comprises a trigger controller, a DMA (direct memory access) controller, a message transmitting engine, a message receiving engine, and a PCIe transceiver controller. The module not only realizes all the functions of SignalTap or ChipScope, but also solves the problem of insufficient Block RAM (random access memory) margin in a large design: because the captured data are exported and stored in memory on the CPU (central processing unit) side instead of on chip, as much data can be collected as that memory permits. In addition, the trigger module is register-level code, so more complicated trigger settings can be realized simply by modifying the code, making the module far more flexible than SignalTap or ChipScope. Moreover, in large designs a CPU and an FPGA commonly sit in the same system, and a PCIe link is a common channel in many high-speed systems, so the module is widely applicable.
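The flexibility claim rests on the trigger being ordinary register-level code. As a rough illustration (not the patented design), a minimal C sketch of such a trigger, assuming a hypothetical match-under-mask condition that must hold for STREAK consecutive cycles:

```c
#include <stdint.h>
#include <stdbool.h>

#define STREAK 4   /* consecutive-match count (hypothetical condition) */

typedef struct {
    uint32_t match, mask;   /* fire when (probe & mask) == (match & mask) */
    int run;                /* consecutive cycles matched so far */
} trigger_t;

/* Evaluate the trigger for one clock cycle's probe value.  Because the
 * condition is plain code, changing it means editing and rebuilding,
 * not reconfiguring a fixed tool. */
static bool trigger_eval(trigger_t *t, uint32_t probe) {
    if ((probe & t->mask) == (t->match & t->mask)) {
        if (++t->run >= STREAK)
            return true;    /* trigger fires; start the DMA export */
    } else {
        t->run = 0;
    }
    return false;
}
```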
Owner:Nanjing Sinovatio Technology Co., Ltd.

Realization method for QC-LDPC (Quasi-Cyclic Low-Density Parity-Check) decoder for improving node processing parallelism

CN103220003A (Active)
The invention relates to a realization method for a QC-LDPC (quasi-cyclic low-density parity-check) decoder that improves node-processing parallelism. The decoder comprises a variable node information updating unit (VNU), a variable node information packing unit (VP), a check node information updating unit (CNU), a check node information packing unit (CP), a check equation computing unit (PCU), storage blocks RAM_f and RAM_m (random access memory) each with a storage bit width of Q·h bits, and a storage block RAM_c with a storage bit width of h bits. By adopting the node information packing units, the method reads and writes batches of memory data simultaneously, which resolves memory access conflicts; and by increasing the number of data items stored in each address unit of the memory, the parallelism of the LDPC decoder's processing units is improved. The method features high throughput, few hardware resources, and low design complexity.
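The packing idea is easiest to see in software. A minimal C sketch, assuming (these values are illustrative, not from the patent) Q = 8 messages of h = 8 bits packed into one Q·h-bit memory word so a single access moves a whole batch:

```c
#include <stdint.h>

#define Q 8   /* messages per memory word (illustrative) */
#define H 8   /* bits per message         (illustrative) */

/* Pack Q h-bit node messages into one Q*h-bit memory word so that a
 * single RAM access reads or writes a whole batch, which is what lets
 * Q processing units run in parallel without access conflicts. */
static uint64_t pack_messages(const uint8_t msg[Q]) {
    uint64_t word = 0;
    for (int i = 0; i < Q; i++)
        word |= (uint64_t)(msg[i] & ((1u << H) - 1)) << (i * H);
    return word;
}

/* Inverse: split one memory word back into Q separate messages. */
static void unpack_messages(uint64_t word, uint8_t msg[Q]) {
    for (int i = 0; i < Q; i++)
        msg[i] = (uint8_t)((word >> (i * H)) & ((1u << H) - 1));
}
```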
Owner:XIAN INSTITUE OF SPACE RADIO TECH

Parallel write-in multi-FIFO (first in, first out) implementation method based on single-chip block RAM (random access memory)

The invention discloses a parallel write-in multi-FIFO (first in, first out) implementation method based on a single-chip block RAM (random access memory). The implementation method comprises the following steps: instantiating a block RAM as a DPRAM (dual-port random access memory) used for storing the data of each channel FIFO, each channel FIFO having a corresponding memory space in the DPRAM; receiving the parallel data write requests of each channel FIFO through an input buffer area and write control logic, and writing the data into the input buffer area corresponding to each channel FIFO; after the input buffer areas receive the data, generating an internal write order by which each channel FIFO's data are taken out of the input buffers and written sequentially into that channel FIFO's memory space; upon receiving an external read request for any channel FIFO, having the read control logic read data out of that channel FIFO's memory space and send the FIFO data to an output port; and setting flags that mark the empty, full, programmable-empty, and programmable-full states of each channel FIFO. The implementation method can realize a plurality of FIFOs that are written in parallel and read out in an arbitrary order.
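A behavioral C sketch of the core idea, partitioning one RAM into per-channel FIFO regions with independent read/write pointers; channel count and depth are assumed, and the input-buffer stage and programmable flags are omitted:

```c
#include <stdint.h>
#include <stdbool.h>

#define NCHAN 4    /* FIFO channels      (assumed) */
#define DEPTH 256  /* words per channel  (assumed) */

static uint32_t dpram[NCHAN * DEPTH];     /* models the single block RAM */
static uint32_t wptr[NCHAN], rptr[NCHAN]; /* free-running per-channel pointers */

static bool fifo_full(int ch)  { return wptr[ch] - rptr[ch] == DEPTH; }
static bool fifo_empty(int ch) { return wptr[ch] == rptr[ch]; }

/* Write one word into channel ch's region of the shared RAM. */
static bool fifo_write(int ch, uint32_t data) {
    if (fifo_full(ch)) return false;
    dpram[ch * DEPTH + (wptr[ch]++ % DEPTH)] = data;
    return true;
}

/* Read one word from channel ch; channels can be read in any order. */
static bool fifo_read(int ch, uint32_t *data) {
    if (fifo_empty(ch)) return false;
    *data = dpram[ch * DEPTH + (rptr[ch]++ % DEPTH)];
    return true;
}
```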
Owner:FUSHUN OPTOELECTRONICS TECH

Adaptive matching method for NVMe SSD read speed and optical-fiber interface speed

The invention discloses an adaptive matching method for NVMe SSD read speed and optical-fiber interface speed in the technical field of data storage. To solve the problem that prior-art NVMe SSD read-speed control methods occupy too many data-cache resources in an FPGA, the method comprises the following steps: first, the FPGA receives a data packet of read data returned from the NVMe SSD, and then the RxReady signal is pulled low for five clock periods. The invention controls the sending speed of data packets when the NVMe SSD reads data by using the RxReady signal of the AXI-Stream data receiving interface on the PCIe hard core, so that the NVMe SSD read speed matches the optical data-interface speed. A complete read command does not need to be split into multiple sub-commands, sufficient timing margin is reserved for packet reception and parsing, and development is easy. In addition, the method reduces the cache-resource requirement during reading: 50% and 92% of Block RAM cache resources are saved when the NVMe SSD logic-block size is 512 bytes and 4 Kbytes, respectively. The method can be widely applied in the technical field of data storage.
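A behavioral C model of the throttle, assuming (as the abstract states) that RxReady is simply held low for five clock cycles after each packet; the cycle-accurate AXI-Stream handshake is abstracted away:

```c
#include <stdbool.h>

#define GAP_CYCLES 5   /* pause length from the abstract */

typedef struct { int gap; } throttle_t;

/* Call once per clock cycle.  packet_done is true on the cycle a
 * read-data packet finishes arriving; the returned value is what to
 * drive on the AXI-Stream RxReady line that cycle. */
static bool throttle_rx_ready(throttle_t *t, bool packet_done) {
    if (packet_done)
        t->gap = GAP_CYCLES;   /* start the five-cycle pause */
    if (t->gap > 0) {
        t->gap--;
        return false;          /* RxReady low: stall the PCIe hard core */
    }
    return true;               /* RxReady high: accept the next packet */
}
```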
Owner:HARBIN INST OF TECH

Device and method for quickly implementing LZ77 compression based on FPGA

The invention relates to a device and a method for quickly implementing LZ77 compression based on an FPGA. In the compression device, a data cache module stores the original data of the compression sliding window, implemented with block RAM resources in the FPGA; a HASH linked-list module builds and stores the HASH dictionary, implemented with block RAM plus logic resources in the FPGA; and an LZ77 encoding module searches for and encodes matching character strings. During compression, a circular cache equal in size to the compression window is constructed to store the HASH conflict linked list, the conflict list is stored in the order of the data to be compressed, and the window-removal operation is replaced by circular overwriting. When matching strings are searched through the HASH linked list, the parallel-processing advantage of the FPGA is used to search for the matching string from two directions simultaneously; meanwhile, candidates whose HASH value matches but whose characters differ are removed in advance by preprocessing during encoding, so that data redundancy is eliminated quickly to achieve LZ77 data compression.
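A minimal C sketch of the circular-overwrite hash chain, with an assumed window size and a hypothetical 3-byte hash; walking the chain and the bidirectional match search are omitted:

```c
#include <stdint.h>

#define WIN   32768              /* sliding-window size (assumed) */
#define HBITS 15                 /* hash-table index width (assumed) */

static int32_t head[1 << HBITS]; /* bucket -> newest position; init to -1 */
static int32_t chain[WIN];       /* circular buffer of "previous position" links */

/* Hypothetical 3-byte hash; the patent does not specify one. */
static uint32_t hash3(const uint8_t *p) {
    return (p[0] * 0x9E3779B1u ^ p[1] * 0x85EBCA77u ^ p[2]) & ((1u << HBITS) - 1);
}

/* Insert position pos into its hash chain.  The slot of the position
 * that slid out of the window WIN bytes ago is simply overwritten (the
 * circular-coverage idea), so no deletion pass is needed; when walking
 * a chain, any link older than pos - WIN is treated as stale. */
static void insert_pos(const uint8_t *data, int32_t pos) {
    uint32_t h = hash3(data + pos);
    chain[pos % WIN] = head[h];   /* link to the previous occurrence */
    head[h] = pos;                /* bucket now points at the newest */
}
```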
Owner:WUHAN ZHONGYUAN HUADIAN SCI & TECH

Network data management method based on one-way parallel multiple linked lists and system thereof

The invention discloses a network data management method based on one-way parallel multiple linked lists. The method includes the following steps: S100, receiving link traffic copied by an optical splitter, analyzing the traffic packet by packet to obtain each packet's five-tuple, and recording each packet's arrival time; S200, hashing each five-tuple to a flow identification (ID) and judging whether the current packet corresponding to the five-tuple is a SYN (synchronization) packet; S300, reading the table-body data of the current packet from SDRAM and Block RAM, and comparing the table-body data with the packet's five-tuple to confirm whether the packet matches the table body. The method and its system achieve packet-by-packet processing in flow management through a simple algorithm, effectively control inactivity-timeout flows, and are high in algorithm efficiency, short in time consumption, low in cost thanks to using an FPGA (field programmable gate array) as the processing core, and convenient to popularize and use.
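Step S200's hash-to-flow-ID step might look like the following C sketch; the mixing function (FNV-1a style) and the table size are assumptions, not taken from the patent:

```c
#include <stdint.h>

/* The five-tuple that identifies a flow. */
typedef struct {
    uint32_t src_ip, dst_ip;
    uint16_t src_port, dst_port;
    uint8_t  proto;
} five_tuple_t;

#define FLOW_BITS 20   /* flow table of 2^20 entries (assumed) */

/* Hash the five-tuple down to a flow ID used as the table index.
 * FNV-1a-style mixing; the constants are illustrative, the patent
 * does not specify a hash function. */
static uint32_t flow_id(const five_tuple_t *t) {
    uint64_t h = 1469598103934665603ull;
    #define MIX(v) (h = (h ^ (uint64_t)(v)) * 1099511628211ull)
    MIX(t->src_ip);   MIX(t->dst_ip);
    MIX(t->src_port); MIX(t->dst_port);
    MIX(t->proto);
    #undef MIX
    return (uint32_t)(h & ((1u << FLOW_BITS) - 1));
}
```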
Owner:Dalian Huanyu Mobile Technology Co., Ltd. +1

Circuit structure for implementing alternative selection of messages

The invention discloses a circuit structure for implementing alternative selection of messages. The circuit structure comprises an alternative judgment logic module, a judgment waiting buffer module, a current SEQID (sequence identifier) buffer module, and a scanning logic module. The judgment waiting buffer module and the current SEQID buffer module are connected to the alternative judgment logic module, and the scanning logic module is connected to the judgment waiting buffer module; the inlet of the alternative judgment logic module is connected to two receiving channels, whose messages are the dual-transmitted messages of the same source equipment. Because independent alternative judgment logic and judgment waiting buffer modules are adopted, the function of each module stays relatively simple and the logic circuit is convenient to implement; moreover, the judgment waiting buffer is managed in an RAM (SDP Block RAM) multiplexing mode, which saves a great quantity of judgment waiting buffer control resources, so the number of message groups the selection circuit supports can reach 1K (1024) or even more.
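A toy C model of the selection by sequence ID: the first copy of each SEQID to arrive on either channel is forwarded and the later duplicate is dropped. The table stands in for the SDP Block RAM; aging entries back out (the scanning logic module's job) is omitted:

```c
#include <stdint.h>
#include <stdbool.h>

#define GROUPS 1024   /* message groups supported, the 1K from the text */

/* One flag per sequence-ID slot: has a copy been accepted already?
 * In the patent this state lives in the multiplexed SDP Block RAM. */
static bool seen[GROUPS];

/* Called for every message arriving on either receive channel.
 * The first copy of a SEQID wins the selection and is forwarded;
 * the duplicate from the other channel is discarded. */
static bool select_message(uint16_t seqid) {
    uint16_t slot = seqid % GROUPS;
    if (seen[slot])
        return false;   /* the other channel's copy already won */
    seen[slot] = true;
    return true;
}
```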
Owner:STATE GRID ZHEJIANG ELECTRIC POWER CO LTD SHAOXING POWER SUPPLY CO +3

Full-pipeline SMS4 encryption and decryption method and system

The invention discloses a full-pipeline SMS4 encryption and decryption method and system. The method uses a full-pipeline processing architecture based on Block RAM: all the operational circuits for the n rounds of key generation and the n rounds of encryption/decryption conversion in a one-pass encryption or decryption process are connected completely in series; the multi-clock-period intermediate operation data, comprising the ciphertext, the intermediate key, the round key, and the encryption/decryption flag, are cached in Block RAM, which also stores the S-box lookup table and the fixed-parameter lookup table; continuous plaintext and ciphertext input is processed by the full pipeline; and multi-clock-period delayed output of the intermediate data is realized through the read-write time difference of the dual-port Block RAM. Realizing the SMS4 algorithm with this pipeline architecture makes full use of the FPGA's embedded Block RAM resources for multi-clock-period caching of the large amount of intermediate data and for S-box lookup-table storage across the many round-key generation and encryption/decryption units, minimizing the resource consumption of configurable logic blocks and maximizing the information throughput rate.
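The "read-write time difference of the dual-port Block RAM" amounts to using the RAM as a delay line: write at the current address, read at an address DELAY slots behind. A C sketch with assumed sizes:

```c
#include <stdint.h>

#define DELAY 32   /* pipeline delay in cycles (assumed) */
#define SIZE  64   /* RAM depth, must exceed DELAY (assumed) */

static uint32_t ram[SIZE];
static uint32_t addr;   /* free-running address counter */

/* One clock cycle of the delay line: the read port returns the value
 * written DELAY cycles earlier while the write port stores the new
 * value, carrying intermediate results between pipeline rounds. */
static uint32_t delay_line(uint32_t in) {
    uint32_t out = ram[(addr + SIZE - DELAY) % SIZE]; /* read port  */
    ram[addr % SIZE] = in;                            /* write port */
    addr++;
    return out;
}
```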
Owner:SICHUAN JIUZHOU ELECTRIC GROUP

SRAM (static random access memory) type FPGA (field programmable gate array) SEU (single event upset) in-operation repair method

The invention discloses an in-operation SEU repair method for SRAM (static random access memory) type FPGAs, belonging to the technical field of SRAM-type FPGA configuration. The method comprises the following steps: 1) after the SRAM-type FPGA is powered up, a configuration-management FPGA performs a full configuration of the SRAM-type FPGA; 2) the configuration-management FPGA reads the configuration bitstream file, scans it, and carries out the following steps: when a frame write command FDRI is detected in the bitstream file, the accompanying frame word count in the FDRI is replaced with the word count of the logic and interconnect frames; when Block RAM content configuration data is detected, it is replaced with no-operation commands; when the register reset command GRESTORE is detected, it is replaced with a no-operation command; and when the I/O startup command STARTUP is detected, it is replaced with a no-operation command; 3) the bitstream file processed in step 2) is written into the SRAM-type FPGA; and 4) steps 2) and 3) are repeated until the SRAM-type FPGA stops working. The method is suitable for in-operation repair of SEUs in SRAM-type FPGAs.
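Conceptually, step 2) is a filtering pass over the configuration words. The C sketch below shows the shape of that pass; the opcode values are placeholders, NOT the real Xilinx bitstream encoding, and the FDRI word-count rewrite and Block RAM skipping are omitted:

```c
#include <stdint.h>
#include <stddef.h>

/* Placeholder opcodes -- NOT the actual Xilinx bitstream encoding;
 * they only stand in for the commands the method filters out. */
#define CMD_NOP       0x00000000u
#define CMD_GRESTORE  0x000000AAu   /* register reset (placeholder) */
#define CMD_STARTUP   0x000000BBu   /* I/O startup    (placeholder) */

/* One scrub pass: replace register-reset and I/O-startup commands
 * with no-ops so rewriting the stream refreshes the logic and
 * interconnect frames without disturbing the running user design. */
static void filter_bitstream(uint32_t *words, size_t n) {
    for (size_t i = 0; i < n; i++)
        if (words[i] == CMD_GRESTORE || words[i] == CMD_STARTUP)
            words[i] = CMD_NOP;
}
```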
Owner:NO 513 INST THE FIFTH INST OF CHINA AEROSPACE SCI & TECH

Three-dimensional FFT calculation device based on FPGA

CN110647719A (Active)
The invention discloses a three-dimensional FFT calculation device based on an FPGA, mainly solving the prior-art problem that FFTs cannot be computed on large three-dimensional data. The FPGA comprises a one-dimensional FFT calculator, a two-dimensional FFT calculator, a data buffer area, and an external memory; the data buffer area is built from the FPGA's internal block RAM, and the external memory caches the results computed by the two-dimensional FFT calculator. The three-dimensional FFT calculator computes the FFT of the third-dimension data from the two-dimensional FFT results cached in the external memory to obtain the three-dimensional FFT result; and a data transmission control module generates address information that controls caching of the two-dimensional FFT's intermediate results to the external memory over an AXI4 bus, and controls the FPGA to read data back from the external memory in page order. The device is high in calculation precision, its three-dimensional FFT results meet practical requirements, and it can be used for simulating dynamic sea-surface electromagnetic scattering.
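The underlying decomposition: a 3D FFT is a 2D FFT on every page followed by a 1D transform along the third axis. The C sketch below uses a naive strided DFT in place of the device's FFT calculators; the sizes and the staging (DDR in the patent, one array here) are simplified:

```c
#include <complex.h>
#include <stddef.h>

static const double PI = 3.141592653589793;

/* Naive strided DFT standing in for the device's FFT calculators;
 * this sketch assumes every dimension is at most 64 points. */
static void dft1d(double complex *v, size_t n, size_t stride) {
    double complex out[64];
    for (size_t k = 0; k < n; k++) {
        out[k] = 0;
        for (size_t j = 0; j < n; j++)
            out[k] += v[j * stride] *
                      cexp(-2.0 * I * PI * (double)(k * j) / (double)n);
    }
    for (size_t k = 0; k < n; k++)
        v[k * stride] = out[k];
}

/* One page: row transforms, then column transforms. */
static void dft2d(double complex *page, size_t nx, size_t ny) {
    for (size_t y = 0; y < ny; y++) dft1d(page + y * nx, nx, 1);
    for (size_t x = 0; x < nx; x++) dft1d(page + x, ny, nx);
}

/* 2D pass page by page (results staged in external memory in the
 * patent), then a strided pass along the third dimension, which is
 * why the device reads the staged data back in page order. */
void dft3d(double complex *vol, size_t nx, size_t ny, size_t nz) {
    for (size_t z = 0; z < nz; z++)
        dft2d(vol + z * nx * ny, nx, ny);
    for (size_t i = 0; i < nx * ny; i++)
        dft1d(vol + i, nz, nx * ny);
}
```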
Owner:XIDIAN UNIV +1

High-speed data caching structure and method

CN111338983A (Pending)
The invention discloses a high-speed data caching structure and method. The caching structure comprises a front-end receive-data caching unit, a middle-end large-capacity data caching unit, a back-end send-data caching unit, and a cache-data control unit. The front-end receive-data caching unit comprises a dual-port Block RAM whose A port writes the data to be cached and whose B port reads that data out to the middle-end large-capacity data caching unit. The middle-end large-capacity data caching unit comprises a high-speed cache chip (DDR3); writes and reads are executed by time-shared ping-pong operation, which guarantees that writing has priority over reading. The back-end send-data caching unit comprises a dual-port Block RAM whose A port writes the data read from the DDR and whose B port reads the data out to the next processing stage. The cache control unit controls the execution of the three units. The method improves program-execution efficiency and reduces the bit error rate in systems with high capacity and relatively demanding real-time requirements.
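A behavioral C sketch of the middle stage's time-shared ping-pong, assuming a plain array in place of DDR3 and a fixed half-buffer size; running the write phase first each cycle is how write priority over read is kept:

```c
#include <stdint.h>

#define HALF 4096                /* words per half-buffer (assumed) */

static uint32_t ddr[2][HALF];    /* stands in for the DDR3 chip */
static int wr_half = 0;          /* half currently owned by the writer */

/* One ping-pong cycle: fill the write half from the front-end BRAM,
 * drain the other half (filled on the previous cycle) to the back-end
 * BRAM, then swap roles for the next cycle. */
static void pingpong_cycle(const uint32_t *from_front, uint32_t *to_back) {
    int i;
    for (i = 0; i < HALF; i++)
        ddr[wr_half][i] = from_front[i];       /* write phase first */
    for (i = 0; i < HALF; i++)
        to_back[i] = ddr[wr_half ^ 1][i];      /* read phase */
    wr_half ^= 1;                              /* swap halves */
}
```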
Owner:SOUTHEAST UNIV