
120 results about "FPGA acceleration" patented technology

The Intel FPGA Acceleration Stack. The Acceleration Stack for Intel Xeon CPU with FPGAs is a robust collection of software, firmware, and tools designed and distributed by Intel to make it easier to develop and deploy Intel FPGAs for workload optimization in the data center.

FPGA accelerator of LSTM neural network and acceleration method of FPGA accelerator

The invention provides an FPGA accelerator for an LSTM neural network and an acceleration method for the accelerator. The accelerator comprises a data distribution unit, an operation unit, a control unit and a storage unit; the operation unit comprises a sparse matrix-vector multiplication module, a nonlinear activation function module and an element-wise multiply-add module. The control unit sends a control signal to the data distribution unit, which reads the input activation values and the neural network weight parameters from the storage unit and feeds them to the operation unit for computation. Computation resources are distributed evenly across the operation units according to the number of non-zero weights, so that idle computation resources are avoided and the performance of the whole network is improved. Meanwhile, the pruned neural network is stored as a sparse network: the weights of each column are stored in the same address space and encoded by row index, improving computation performance and data throughput while preserving accuracy.
Owner:NANJING UNIV
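A minimal host-side sketch of the storage and load-balancing ideas in the abstract above: a pruned weight matrix is kept column-wise as (row index, value) pairs, and non-zero weights are spread across a fixed number of processing elements so none sits idle. The greedy least-loaded assignment, PE count and threshold are illustrative assumptions, not the patented hardware design.

```python
# Illustrative sketch only: column-wise sparse storage plus balancing of
# non-zero weights across processing elements (PEs). NUM_PES is assumed.
import numpy as np

NUM_PES = 4

def encode_sparse_columns(weights, threshold=1e-6):
    """Store each column's non-zeros as (row_index, value) pairs."""
    columns = []
    for col in range(weights.shape[1]):
        nz_rows = np.nonzero(np.abs(weights[:, col]) > threshold)[0]
        columns.append([(int(r), float(weights[r, col])) for r in nz_rows])
    return columns

def balance_across_pes(columns, num_pes=NUM_PES):
    """Greedily assign columns to the PE with the fewest non-zeros so far."""
    pe_cols = [[] for _ in range(num_pes)]
    pe_load = [0] * num_pes
    for col_id, entries in sorted(enumerate(columns), key=lambda c: -len(c[1])):
        pe = pe_load.index(min(pe_load))      # least-loaded PE gets the column
        pe_cols[pe].append(col_id)
        pe_load[pe] += len(entries)
    return pe_cols, pe_load

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.standard_normal((64, 64))
    w[rng.random((64, 64)) < 0.9] = 0.0       # roughly 90% pruned
    assignment, load = balance_across_pes(encode_sparse_columns(w))
    print("non-zeros per PE:", load)
```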

FPGA-based network function acceleration method and system

The invention relates to an FPGA-based network function acceleration method and system. The method comprises the step of establishing the network function acceleration system, which comprises a physical machine and an acceleration card connected through a PCIe channel. The physical machine comprises a processor; the acceleration card comprises an FPGA and provides network function acceleration for the processor. The processor is configured to query whether the required acceleration module already exists in the FPGA when the acceleration card is needed; if so, it obtains the acceleration function ID corresponding to that module; if not, it selects at least one partial reconfiguration region in the FPGA, configures the region as the required acceleration module and generates the corresponding acceleration function ID; and/or it sends an acceleration request to the FPGA, wherein the acceleration request comprises the data packet to be processed and the acceleration function ID. The FPGA is configured to forward the acceleration request to the required acceleration module for acceleration processing according to the acceleration function ID.
Owner:HUAZHONG UNIV OF SCI & TECH
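A small sketch of the dispatch logic the abstract above describes: check whether a module is already loaded, configure a free partial-reconfiguration region if not, then send a request tagged with the acceleration function ID. The class, method names and region count are hypothetical; real hardware access is omitted.

```python
# Illustrative sketch only: host-side lookup / configure / dispatch flow.
import itertools

class FpgaCard:
    """Stand-in for the PCIe-attached FPGA acceleration card."""
    def __init__(self, num_pr_regions=4):
        self.free_regions = list(range(num_pr_regions))
        self.loaded = {}                 # function name -> acceleration function ID
        self._ids = itertools.count(1)

    def find_module(self, func_name):
        return self.loaded.get(func_name)

    def configure_region(self, func_name):
        if not self.free_regions:
            raise RuntimeError("no free partial-reconfiguration region")
        region = self.free_regions.pop(0)
        func_id = next(self._ids)
        self.loaded[func_name] = func_id
        print(f"configured PR region {region} as '{func_name}' (id={func_id})")
        return func_id

    def accelerate(self, func_id, packet):
        # The real card would route the packet to the module chosen by func_id.
        return f"packet {packet!r} processed by module {func_id}"

def offload(card, func_name, packet):
    func_id = card.find_module(func_name) or card.configure_region(func_name)
    return card.accelerate(func_id, packet)

if __name__ == "__main__":
    card = FpgaCard()
    print(offload(card, "ipsec_encrypt", b"\x01\x02"))
    print(offload(card, "ipsec_encrypt", b"\x03\x04"))  # module is reused
```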

Firmware updating method, device and medium of FPGA accelerator card

Inactive · CN107656776A · Avoid the problem of low manual update efficiency · Improve usability · Program loading/initiating · High availability · Complex network
The invention discloses a firmware updating method, device and medium for an FPGA accelerator card. The method comprises the steps of: reading a configuration list through script execution to acquire the device address of each FPGA accelerator card to be updated; acquiring the updated firmware for the FPGA accelerator card and locating the card through the device address so that the firmware can be burnt into the card; and loading the updated firmware so that the FPGA accelerator card is updated. Compared with the manual approach of burning firmware to FPGA accelerator cards one by one, this method lets the system update the corresponding FPGA accelerator cards automatically by running scripts. It thereby solves the problem of low manual update efficiency caused by the large number of FPGA accelerator cards in a complex network device environment, reduces the error probability caused by differences among the firmware of numerous cards during manual updates, and guarantees high availability of the FPGA accelerator cards. Furthermore, the invention also provides the corresponding firmware updating device and medium, which offer the same advantages.
Owner:ZHENGZHOU YUNHAI INFORMATION TECH CO LTD
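A brief sketch of the scripted flow the abstract above describes: read a configuration list of device addresses, then burn the same firmware image to each listed card. The file format and the `flash_card` placeholder are assumptions; a real deployment would invoke the vendor's burning tool for the specific card.

```python
# Illustrative sketch only: batch firmware update driven by a config list.
import csv

def read_device_addresses(config_path):
    """Configuration list: one device address (e.g. a PCIe BDF) per row."""
    with open(config_path, newline="") as f:
        return [row[0].strip() for row in csv.reader(f) if row]

def flash_card(device_address, firmware_path):
    # Placeholder for the vendor-specific burn-and-reload step.
    print(f"burning {firmware_path} to card at {device_address}")
    return True

def update_all(config_path, firmware_path):
    results = {}
    for addr in read_device_addresses(config_path):
        try:
            results[addr] = flash_card(addr, firmware_path)
        except Exception as exc:          # keep going if one card fails
            results[addr] = False
            print(f"update failed for {addr}: {exc}")
    return results
```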

Hadoop heterogeneity method and system based on storage and acceleration optimization

The invention discloses a Hadoop heterogeneity method and system based on storage and acceleration optimization, and belongs to the field of distributed computing. According to the technical scheme, storage media are divided into three types according to data processing requirements, namely solid-state storage media, common storage media and high-density storage media, so that the most appropriate storage mode is found for each type of data. Meanwhile, an application whose computing performance needs to be improved is assigned to an FPGA accelerator or a GPU accelerator implementing the specific algorithm, improving the processing performance of the application; the algorithm functions and layouts of the FPGA and GPU accelerators can be switched statically. The invention further discloses the corresponding Hadoop heterogeneity system based on storage and acceleration optimization. The method and system improve the read/write performance of the whole cluster, the execution performance of application tasks, and the resource utilization rate of the acceleration devices.
Owner:HUAZHONG UNIV OF SCI & TECH
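A tiny sketch of the two placement decisions sketched in the abstract above: data is mapped to one of three storage tiers, and compute-heavy work is routed to an accelerator that implements the required algorithm. The tier keys and the accelerator registry are illustrative assumptions only.

```python
# Illustrative sketch only: storage-tier selection plus accelerator routing.
STORAGE_TIERS = {
    "hot_random": "solid-state storage",     # latency-sensitive, random access
    "warm_general": "common storage",        # ordinary workloads
    "cold_archive": "high-density storage",  # large, rarely read data
}

ACCELERATORS = {
    "compression": "FPGA",
    "matrix_multiply": "GPU",
    "pattern_match": "FPGA",
}

def place_data(access_pattern):
    return STORAGE_TIERS.get(access_pattern, "common storage")

def place_compute(algorithm):
    return ACCELERATORS.get(algorithm, "CPU")   # fall back to the CPU

if __name__ == "__main__":
    print(place_data("hot_random"))      # -> solid-state storage
    print(place_compute("compression"))  # -> FPGA
```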

Convolutional neural network accelerator based on CPU-FPGA memory sharing

The invention discloses a convolutional neural network accelerator based on CPU-FPGA memory sharing. The CPU processing subsystem comprises an input control module, a configuration parameter generation module and an output control module: the input control module receives and caches the pixel data and the weight data; the configuration parameter generation module manages the configuration parameters; and the output control module controls data transmission. The FPGA acceleration subsystem comprises an on-chip storage module, a calculation engine module and a control module: the on-chip storage module buffers data and handles read/write access; the calculation engine module accelerates the computation; and the control module controls read/write operations on the on-chip storage module and coordinates data exchange and computation with the calculation engine module. The design fully exploits the high parallelism, high throughput and low power consumption of the FPGA while making full use of the flexible and efficient data processing of the CPU, so that the whole system can perform convolutional neural network inference efficiently and quickly at relatively low power consumption.
Owner:BEIHANG UNIV
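A possible sketch of what the CPU-side configuration parameter generation could look like for such a split: derive per-layer tiling parameters from the layer shape and an assumed on-chip buffer budget so the FPGA engine can be driven tile by tile. The buffer size, data width and field names are assumptions, not taken from the patent.

```python
# Illustrative sketch only: per-layer tiling parameters for a shared-memory
# CPU-FPGA CNN accelerator. ON_CHIP_BUFFER_BYTES is an assumed budget.
from dataclasses import dataclass
import math

ON_CHIP_BUFFER_BYTES = 512 * 1024

@dataclass
class LayerConfig:
    in_channels: int
    out_channels: int
    height: int
    width: int
    kernel: int
    tile_rows: int = 0                # filled in by generate_config

def generate_config(layer, bytes_per_elem=2):
    """Pick the largest row tile whose input + output footprint fits on chip."""
    for tile in range(layer.height, 0, -1):
        in_bytes = tile * layer.width * layer.in_channels * bytes_per_elem
        out_bytes = tile * layer.width * layer.out_channels * bytes_per_elem
        if in_bytes + out_bytes <= ON_CHIP_BUFFER_BYTES:
            layer.tile_rows = tile
            return layer
    raise ValueError("layer does not fit even with a one-row tile")

if __name__ == "__main__":
    cfg = generate_config(LayerConfig(64, 128, 56, 56, 3))
    tiles = math.ceil(cfg.height / cfg.tile_rows)
    print(f"tile_rows={cfg.tile_rows}, tiles per layer={tiles}")
```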

Intrusion prevention system and method

The invention discloses an intrusion prevention system and method. The intrusion prevention system comprises a data packet capture module, a data packet analysis module, a matching filter module, an FPGA (field programmable gate array) acceleration platform and a feature learning module. The data packet capture module captures and stores data packets entering a host; the data packet analysis module analyzes and reassembles the captured data packets; the matching filter module matches and filters the captured data packets through a matching filter algorithm; the FPGA acceleration platform uses the FPGA computing system to accelerate the data and the execution of the algorithms in the packet classification module, the matching filter module and the neural training module; and the feature learning module performs neural training on the filtered data by means of a neural network algorithm embedded in the FPGA acceleration platform. On this basis, the system achieves high computing capability, can detect intrusion behaviors in time before they occur, avoids false alarms and missed reports, provides a good prevention effect, and is well suited to intrusion prevention for big data.
Owner:INFORMATION & TELECOMM COMPANY SICHUAN ELECTRIC POWER
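A minimal software sketch of the pipeline stages named in the abstract above: capture, parse, signature matching, and a hand-off of the survivors to a learning stage. In the patented system the matching and neural training run on the FPGA platform; here both are plain Python stand-ins, and the signatures are made up.

```python
# Illustrative sketch only: capture -> parse -> match filter -> hand-off.
SIGNATURES = [b"/etc/passwd", b"' OR 1=1", b"\x90\x90\x90\x90"]

def parse_packet(raw):
    """Very small stand-in for header/payload reassembly."""
    return {"header": raw[:4], "payload": raw[4:]}

def match_filter(packet):
    """Return True if any known malicious signature appears in the payload."""
    return any(sig in packet["payload"] for sig in SIGNATURES)

def pipeline(raw_packets):
    suspicious, clean = [], []
    for raw in raw_packets:
        pkt = parse_packet(raw)
        (suspicious if match_filter(pkt) else clean).append(pkt)
    # 'clean' traffic would be forwarded; 'suspicious' traffic would go to
    # the FPGA-accelerated neural training / classification stage.
    return suspicious, clean

if __name__ == "__main__":
    bad, good = pipeline([b"HDR0GET /index.html", b"HDR1id=' OR 1=1 --"])
    print(len(bad), "suspicious,", len(good), "clean")
```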

Whole genome sequencing data calculation interpretation method

The invention discloses a whole genome sequencing data calculation and interpretation method, comprising the following implementation steps: inputting the reference whole genome data used for whole genome sequencing and the sequencing sample data of the organism, and preprocessing them; the CPU (Central Processing Unit) calls the FPGA (Field Programmable Gate Array) for acceleration to align the reliable sequencing sample data against the indexed reference whole genome data, obtaining an indexed alignment result with duplicate marks; the CPU calls the FPGA and the GPU (Graphics Processing Unit) for acceleration to perform genome reassembly on the reliable sequencing sample data and to carry out variant identification on the indexed alignment result with duplicate marks; and the CPU calls the GPU and a DSP (Digital Signal Processor) for acceleration to perform visualization, and calls a deep learning model implemented in hardware on the FPGA to analyze and mine the whole genome and variant functions on the basis of the visualization result. The method comprehensively utilizes GPU, DSP and FPGA processors for acceleration, and has the advantages of being fast, real-time, accurate, easy to understand and varied in presentation.
Owner:GENETALKS BIO TECH CHANGSHA CO LTD
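A short sketch of the orchestration pattern the abstract above implies: the CPU walks through the pipeline stages and dispatches each one to the accelerators named for that stage. The stage names and the dispatch function are placeholders, not real bioinformatics kernels.

```python
# Illustrative sketch only: CPU-driven stage dispatch to FPGA/GPU/DSP.
PIPELINE = [
    ("preprocess",                  ("CPU",)),
    ("align_to_reference",          ("FPGA",)),
    ("reassemble_and_call_variants", ("FPGA", "GPU")),
    ("visualize",                   ("GPU", "DSP")),
    ("interpret_with_dl_model",     ("FPGA",)),
]

def dispatch(stage, devices, data):
    # A real system would enqueue work on the named devices; here we just
    # record which devices the stage was routed to.
    return {"stage": stage, "devices": devices, "input_size": len(data)}

def run_pipeline(sample_data):
    return [dispatch(stage, devices, sample_data) for stage, devices in PIPELINE]

if __name__ == "__main__":
    for step in run_pipeline(b"ACGT" * 1000):
        print(step["stage"], "->", "+".join(step["devices"]))
```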

Data processing method and device of heterogeneous computing platform and readable storage medium

The invention discloses a data processing method and device for a heterogeneous computing platform and a computer-readable storage medium. In the method, a data storage area and a data processing result storage area are reserved in the host memory space in advance, and a to-be-processed data storage area and a calculation result storage area are reserved in the memory space of the FPGA acceleration board card. After the host stores the data to be calculated in the data storage area, it issues a data processing request to the FPGA acceleration board card; the board card then actively reads the data to be calculated from the data storage area and stores it in its own to-be-processed data storage area. The corresponding data processing algorithm is called to perform the calculation on the data in the to-be-processed data storage area, and the calculation result is stored in the calculation result storage area. Finally, the calculation result is actively written back to the data processing result storage area of the host. The data transmission efficiency of the heterogeneous computing platform and the computing performance of the FPGA acceleration board card are thereby improved.
Owner:INSPUR BEIJING ELECTRONICS INFORMATION IND
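A toy sketch of the handshake the abstract above describes: the host reserves two areas and raises a request, and the board-card side pulls the input, runs the chosen algorithm, and pushes the result back. Shared dictionaries stand in for host and card memory; the algorithm table is made up.

```python
# Illustrative sketch only: host submits, card actively reads, computes,
# and actively writes the result back to the host's result area.
host_mem = {"data_area": None, "result_area": None}
card_mem = {"pending_area": None, "result_area": None}

ALGORITHMS = {"sum": sum, "max": max}

def host_submit(data):
    host_mem["data_area"] = data          # host fills its data storage area
    return {"algorithm": "sum"}           # the data processing request

def card_handle(request):
    card_mem["pending_area"] = host_mem["data_area"]        # card reads actively
    algo = ALGORITHMS[request["algorithm"]]
    card_mem["result_area"] = algo(card_mem["pending_area"])
    host_mem["result_area"] = card_mem["result_area"]        # card writes back

if __name__ == "__main__":
    req = host_submit([3, 1, 4, 1, 5])
    card_handle(req)
    print("host sees result:", host_mem["result_area"])      # -> 14
```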

FPGA cloud platform acceleration resource allocation method and system

Active · CN110618871A · Improve experience · Protect effective rights and interests · Resource allocation · Resource pool · Distribution method
The invention provides an FPGA cloud platform acceleration resource allocation method and system. Accelerator card resources are allocated and coordinated according to the time delay between the user host and the FPGA accelerator cards deployed in each network segment. When a user applies to use an FPGA, the FPGA accelerator card in the FPGA resource pool with the minimum delay to that host is allocated to the user, thereby realizing the allocation of the acceleration resources of the FPGA cloud platform. The cloud monitoring and management platform can obtain the transmission delay to the virtual machine network according to the different geographic positions of the FPGA board cards in the FPGA resource pool, and can allocate the board card with the minimum delay to each user. In addition, unauthorized users can be effectively prevented from arbitrarily accessing the acceleration resources in the resource pool, protecting the legitimate rights and interests of the resource pool owner. The method and device effectively protect FPGA accelerator cards that a user is not authorized to use, ensure that the network delay of the board card allocated to the user is minimal, achieve the optimal acceleration effect, and improve the user experience.
Owner:INSPUR SUZHOU INTELLIGENT TECH CO LTD
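A compact sketch of the allocation rule stated above: among the free accelerator cards the user is authorized to use, pick the one with the lowest measured delay to that user's host. The card records, latency figures and authorization table are illustrative.

```python
# Illustrative sketch only: minimum-latency allocation with an
# authorization check over an FPGA resource pool.
cards = [
    {"id": "fpga-a", "segment": "east", "free": True,  "latency_ms": {"user1": 0.9, "user2": 2.4}},
    {"id": "fpga-b", "segment": "west", "free": True,  "latency_ms": {"user1": 1.7, "user2": 0.6}},
    {"id": "fpga-c", "segment": "east", "free": False, "latency_ms": {"user1": 0.5, "user2": 2.0}},
]
authorized = {"user1": {"fpga-a", "fpga-b"}, "user2": {"fpga-b"}}

def allocate(user):
    candidates = [c for c in cards
                  if c["free"] and c["id"] in authorized.get(user, set())]
    if not candidates:
        return None                       # nothing free, or not authorized
    best = min(candidates, key=lambda c: c["latency_ms"][user])
    best["free"] = False                  # reserve the card for this user
    return best["id"]

if __name__ == "__main__":
    print(allocate("user1"))   # -> fpga-a (0.9 ms beats 1.7 ms)
    print(allocate("user2"))   # -> fpga-b
```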

Data acceleration operation processing method and device and computer readable storage medium

The invention discloses a data acceleration operation processing method and device and a computer-readable storage medium. The method comprises the following steps: the storage server applies for an input cache space and an output cache space in advance through a memory management module, and meanwhile transmits the starting address of each memory page of the input and output cache spaces to a direct memory access (DMA) descriptor table, wherein the input cache space stores the raw IO data corresponding to a user data request and the output cache space stores the data processing result obtained after the raw IO data undergoes the accelerated operation; the FPGA acceleration card receives a data acceleration operation request, transfers the raw IO data from the input cache space to the card through the DMA descriptor table for accelerated operation processing, and transfers the data processing result to the output cache space through the DMA descriptor table. In this way, memory copies are avoided while the FPGA acceleration card performs data processing for the storage server, effectively reducing the performance loss of the storage server during acceleration.
Owner:INSPUR SUZHOU INTELLIGENT TECH CO LTD
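A small sketch of the bookkeeping step the abstract above centres on: the starting address and length of every page of the input and output cache spaces are written into a descriptor table that the card's DMA engine can walk, so the host never copies the data itself. The page size and the addresses are illustrative values.

```python
# Illustrative sketch only: building a per-page DMA descriptor table for
# the input and output cache spaces.
PAGE_SIZE = 4096

def build_descriptor_table(base_addr, total_bytes):
    """One (address, length) descriptor per memory page of the buffer."""
    table = []
    offset = 0
    while offset < total_bytes:
        length = min(PAGE_SIZE, total_bytes - offset)
        table.append({"addr": base_addr + offset, "len": length})
        offset += length
    return table

if __name__ == "__main__":
    input_table = build_descriptor_table(0x1000_0000, 10_000)   # input cache space
    output_table = build_descriptor_table(0x2000_0000, 10_000)  # output cache space
    # The card's DMA engine would walk these tables; here we just print them.
    for d in input_table:
        print(hex(d["addr"]), d["len"])
```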

Method and device for power supply control of FPGA accelerator card auxiliary power supply, and medium

The invention discloses a method, device and medium for controlling the auxiliary power supply of an FPGA accelerator card. The method comprises the steps of: establishing a first power supply path between the auxiliary power interface and the FPGA, and a second power supply path between the auxiliary power interface and a preset component; obtaining the current operating power consumption of the FPGA and, based on a preset control standard, determining the power supply group state that meets this power consumption; according to the power supply group state, switching each power supply in the group on or off so that the FPGA is powered through the auxiliary power interface; and determining whether the main power interface supplying the preset component is in a preset state, and if so, supplying the preset component through the auxiliary power interface. The auxiliary power supply is thus used in a comparatively reasonable and flexible manner, and the safe operation of each component and the overall working efficiency are guaranteed. In addition, the invention also provides the corresponding power supply control device and medium, with the beneficial effects described above.
Owner:ZHENGZHOU YUNHAI INFORMATION TECH CO LTD
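A very small sketch of the control rule described above: map the FPGA's current power draw to a power-supply group state (which supplies are on), and switch the preset component to the auxiliary interface when the main interface reports the preset state. The per-supply capacity, group size and state names are assumptions.

```python
# Illustrative sketch only: power-group state selection and main/auxiliary
# failover for the preset component.
SUPPLY_STEP_WATTS = 25           # assumed capacity contributed per supply
GROUP_SIZE = 4

def group_state_for(power_watts):
    """Turn on just enough supplies in the group to cover the FPGA draw."""
    needed = -(-int(power_watts) // SUPPLY_STEP_WATTS)   # ceiling division
    needed = max(1, min(needed, GROUP_SIZE))
    return [i < needed for i in range(GROUP_SIZE)]       # True = supply on

def select_component_source(main_interface_state, preset_state="failed"):
    return "auxiliary" if main_interface_state == preset_state else "main"

if __name__ == "__main__":
    print(group_state_for(60))                   # -> [True, True, True, False]
    print(select_component_source("failed"))     # -> auxiliary
    print(select_component_source("normal"))     # -> main
```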