
Large-scale multi-operation floating point matrix calculation acceleration implementation method and device

A floating-point matrix and implementation-method technology, applied in the fields of computing, complex mathematical operations, and instruments. It addresses problems such as insufficient structural reusability, high logic-resource consumption, and increased area, and achieves the effects of flexible use, high computing efficiency, and low resource consumption.

Pending Publication Date: 2022-03-22
NAT UNIV OF DEFENSE TECH +1
Cites: 1, Cited by: 0
  • Summary
  • Abstract
  • Description
  • Claims
  • Application Information

AI Technical Summary

Problems solved by technology

However, this type of technology has drawbacks: each module consumes more logic resources, the circuit area increases, and the structure offers relatively limited reusability.




Detailed Description of the Embodiments

[0037] The present invention will be further described in detail below in conjunction with the accompanying drawings and specific embodiments.

[0038] As shown in Figure 1, the large-scale multi-operation floating-point matrix calculation acceleration method of the present invention comprises the following steps:

[0039] Step S1: receive an external input signal and determine the matrix operation mode according to the operation type of the matrix to be processed: when the operation mode is matrix addition or matrix subtraction, go to step S3; when the operation mode is matrix multiplication, matrix-vector multiplication, or matrix-scalar multiplication, go to step S2;

[0040] Step S2: initialize the on-chip RAM to zero, and go to step S4;

[0041] Step S3: load data source C into the on-chip RAM through the RAM channel, and go to step S4;

[0042] Step S4: preload part of data stream A through the RAM channel, then load data stream A and...
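
The branch in steps S1 through S3 can be summarized as a small piece of host-side control logic. Below is a minimal C sketch of that dispatch, assuming illustrative names (matrix_op_t, prepare_onchip_ram, RAM_WORDS) and a simple array standing in for the memory-mapped on-chip RAM; it is an interpretation of the flow described above, not the patent's actual implementation.

    #include <stddef.h>
    #include <string.h>

    /* Illustrative sketch of steps S1-S3; names and sizes are assumptions,
     * not taken from the patent. */
    typedef enum {
        OP_MAT_ADD,
        OP_MAT_SUB,
        OP_MAT_MUL,
        OP_MAT_VEC_MUL,
        OP_MAT_SCALAR_MUL
    } matrix_op_t;

    #define RAM_WORDS 4096                 /* assumed on-chip RAM capacity */
    static float onchip_ram[RAM_WORDS];

    /* Step S1: inspect the operation mode and prepare the on-chip RAM. */
    static void prepare_onchip_ram(matrix_op_t op, const float *src_c, size_t n)
    {
        if (op == OP_MAT_ADD || op == OP_MAT_SUB) {
            /* Step S3: load data source C through the RAM channel. */
            memcpy(onchip_ram, src_c, n * sizeof(float));
        } else {
            /* Step S2: multiplication-type modes start from a cleared RAM. */
            memset(onchip_ram, 0, n * sizeof(float));
        }
        /* Both branches continue with step S4: preload part of data stream A,
         * then stream A and B while the computation runs. */
    }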



Abstract

The invention discloses a large-scale multi-operation floating point matrix calculation acceleration implementation method, which comprises the following steps: S1, receiving an external input signal and determining the matrix operation mode according to the operation type of the to-be-processed matrix: when the operation mode is matrix addition or matrix subtraction, executing step S3; when the operation mode is matrix multiplication, matrix-vector multiplication or matrix-scalar multiplication, executing step S2; S2, initializing an on-chip RAM (Random Access Memory) to zero, and executing step S4; S3, loading data source C into the on-chip RAM through the RAM channel, and executing step S4; S4, preloading part of data stream A through the RAM channel, and loading data stream A and data stream B while calculating; S5, after the calculation is completed, transmitting the calculation result to the off-chip memory. The device is used for implementing the method. The method has the advantages of low storage requirement, high calculation efficiency, high reusability, wide application range and the like.
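
One way to read the two RAM-preparation branches in the abstract is that the on-chip RAM always acts as the accumulator of the datapath: multiplication-type modes must start from zero so that the running sum of partial products equals the product, while addition and subtraction can start from the preloaded C. The C sketch below illustrates that reading; the function names, and the assumption that the streamed operand combined with C is data stream A, are mine rather than the patent's.

    #include <stddef.h>

    /* Multiplication-type modes: the accumulator must start at 0.0f so that
     * after the last rank-1 update acc holds exactly A * B. */
    static void mul_accumulate(float *acc, const float *a_col, const float *b_row,
                               size_t rows, size_t cols)
    {
        for (size_t i = 0; i < rows; i++)
            for (size_t j = 0; j < cols; j++)
                acc[i * cols + j] += a_col[i] * b_row[j];
    }

    /* Addition/subtraction: the accumulator is preloaded with C, so streaming
     * the other operand through the same adder yields the sum or difference
     * without a separate datapath (sign = +1.0f for add, -1.0f for subtract). */
    static void addsub_accumulate(float *acc, const float *a, size_t n, float sign)
    {
        for (size_t i = 0; i < n; i++)
            acc[i] += sign * a[i];
    }

Used this way, the same storage and the same accumulation hardware would cover all five operation modes, which would be consistent with the reusability and low-resource-consumption claims in the summary above.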

Description

Technical Field

[0001] The invention mainly relates to the technical field of high-performance computing, and in particular to a large-scale multi-operation floating-point matrix calculation acceleration method and device.

Background

[0002] In many scientific and engineering fields, matrix calculation is a basic and widely used operation, for example in digital image storage, processing and recognition, neural network computation, and Kalman filtering in control systems. Matrix computing directly affects the performance of high-performance computers.

[0003] At present, platforms such as CPUs and GPGPUs use software libraries such as MKL and cuBLAS to accelerate matrix calculations, but these methods are limited by energy consumption and complexity, and perform poorly in mobile or embedded systems.

[0004] In terms of FPGA (Field-Programmable Gate Array) development, there is some relevant research on dense matri...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06F17/16; G06F7/483
CPC: G06F17/16; G06F7/483
Inventor: 彭元喜, 张龙龙, 郭阳, 扈啸, 黄啊慧, 粟毅, 张世亮, 田甜, 李岩
Owner: NAT UNIV OF DEFENSE TECH