
Pipelined computing acceleration co-processing method and system

A pipelined co-processing technology, applied in computing, electrical digital data processing, program control design, etc. It addresses the problems that each calculation occupies a long time slot, that resources are wasted, and that high computing efficiency is difficult to obtain, achieving the effects of reducing computing time slots, improving throughput, and improving work efficiency.

Active Publication Date: 2021-01-29
GUANGDONG COMM & NETWORKS INST

AI Technical Summary

Problems solved by technology

However, this architecture has the following defects: every calculation requires waking the DMA to update the operands of the calculation acceleration unit, so each calculation occupies a long time slot; the calculation acceleration unit sits idle during the DMA transfer, wasting resources; and high computing efficiency is therefore difficult to obtain.



Examples


Embodiment 1

[0033] See Figure 3, which is a schematic flowchart of a pipelined computing acceleration co-processing method disclosed in an embodiment of the present invention. The method can be applied in a computing acceleration co-processing system that includes multiple computing units, each used to perform operations of a different level; the embodiment of the present invention does not limit the computing acceleration co-processing system. As shown in Figure 3, the pipelined computing acceleration co-processing method may include the following operations:

[0034] 101. Receive a plurality of operation groups to be calculated, and analyze the operation groups to generate the number of operations to be performed and the operands for each level of operation.

[0035] For ease of understanding, the calculation units are implemented as a pipelined structure CAU0, CAU1 ... CAUn. com...
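The parsing step in [0034] can be sketched as follows. This is a minimal illustration under assumed data shapes: the names `OperationGroup` and `analyze`, the list-of-operands representation, and the even-split policy across pipeline levels are all assumptions, not details taken from the patent.

```python
# Illustrative sketch of step 101: receive operation groups, derive the
# operation count, and split operands across pipelined units CAU0..CAUn.
from dataclasses import dataclass
from typing import List

@dataclass
class OperationGroup:
    operands: List[int]          # one operand per operation, innermost first

def analyze(groups: List[OperationGroup], n_levels: int):
    """For each group, derive the total number of operations to perform
    and split the operands across the n_levels calculation units."""
    plans = []
    for g in groups:
        total = len(g.operands)              # number of operations to perform
        base, extra = divmod(total, n_levels)
        shares, i = [], 0
        for level in range(n_levels):
            count = base + (1 if level < extra else 0)
            shares.append(g.operands[i:i + count])  # this level's operands
            i += count
        plans.append({"total_ops": total, "per_level": shares})
    return plans
```

With a single 28-operation group and a hypothetical 4-level pipeline, `analyze` yields `total_ops = 28` and four operand slices of 7 each.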

Embodiment 2

[0051] See Figure 5, which is a schematic flowchart of another pipelined computing acceleration co-processing method disclosed in an embodiment of the present invention. The method can be applied in a computing acceleration co-processing system that includes multiple computing units, each used to perform operations of a different level; the embodiment of the present invention does not limit the computing acceleration co-processing system. As shown in Figure 5, the pipelined computing acceleration co-processing method may include the following operations:

[0052] In this embodiment, the operation groups to be calculated are: the first operation group K1=(A27(A26(...)0), the second operation group K2=(B20(B19(...)0), and the third operation group K3=(C22(C21(...)0).

[0053] First, the numbers of operations of K1, K2, and K3 a...
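Reading the operand indices above as implying 28 operations for K1 (A0..A27), 21 for K2 (B0..B20), and 23 for K3 (C0..C22), the per-unit allocation might look like the following even-split sketch. The four-stage pipeline and the splitting policy are assumptions for illustration; the patent's exact allocation rule is not visible in this excerpt.

```python
def allocate(total_ops: int, n_units: int):
    """Split a group's operation count across n_units pipelined
    calculation units as evenly as possible (assumed policy)."""
    base, extra = divmod(total_ops, n_units)
    return [base + (1 if i < extra else 0) for i in range(n_units)]

# Hypothetical 4-stage pipeline CAU0..CAU3:
allocate(28, 4)  # K1 -> [7, 7, 7, 7]
allocate(21, 4)  # K2 -> [6, 5, 5, 5]
allocate(23, 4)  # K3 -> [6, 6, 6, 5]
```

Because every stage gets a near-equal share, no unit idles much longer than the others, which is consistent with the throughput goal stated in the abstract.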

Embodiment 3

[0062] See Figure 7, which is a schematic diagram of a pipelined computing acceleration co-processing system disclosed in an embodiment of the present invention. As shown in Figure 7, the pipelined computing acceleration co-processing system may include:

[0063] A plurality of calculation units 1, each used to perform calculations of a different level.

[0064] An operand management unit 2, configured to receive a plurality of operation groups to be calculated and to analyze them, generating the number of operations to be performed and the operands for each level of operation.

[0065] The calculation units are implemented as a pipelined structure CAU0, CAU1 ... CAUn. An implementation of the pipelined computing acceleration co-processing method shown in Figure 4 is described in detail below. The implementation includes an operand management unit (operands...
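The system of [0062]–[0065] can be sketched as a small runnable model, under stated assumptions: the class names, the reduction-style operation, and the even operand split are illustrative, and the patent's DMA-free operand staging is only approximated here by pre-slicing the operand list before the units run.

```python
# Illustrative model: an operand management unit pre-stages operands
# for pipelined calculation units, so no unit waits on a DMA transfer.
import operator

class CalculationUnit:
    """One pipeline level; performs its allocated share of operations."""
    def __init__(self, level):
        self.level = level

    def run(self, value, operands, op):
        for operand in operands:
            value = op(value, operand)   # one operation per operand
        return value

class OperandManagementUnit:
    """Receives an operation group and feeds each level its operands."""
    def __init__(self, units):
        self.units = units

    def process(self, initial, operands, op):
        # Split the operand stream evenly across the pipeline levels
        # (an assumed allocation policy, for illustration only).
        n = len(self.units)
        base, extra = divmod(len(operands), n)
        value, i = initial, 0
        for k, unit in enumerate(self.units):
            share = base + (1 if k < extra else 0)
            value = unit.run(value, operands[i:i + share], op)
            i += share
        return value

omu = OperandManagementUnit([CalculationUnit(i) for i in range(4)])
result = omu.process(0, list(range(1, 11)), operator.add)  # sums 1..10 -> 55
```

Each unit receives its operand slice up front and passes its partial result to the next level, mirroring the CAU0, CAU1 ... CAUn chain described above.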



Abstract

The invention discloses a pipelined computing acceleration co-processing method. The method is applied to a computing acceleration co-processing system comprising a plurality of computing units, each used to execute operations of a different level. A plurality of operation groups to be computed are received; the operation groups are analyzed to generate the number of operations to be performed and the operands for each level of operation; operand preparation is performed for each level of operation through a data path, and a calculation identifier of the associated level is configured; calculation units of different levels are allocated to each operation group according to the number of operations, and the operations to be executed are distributed among the calculation units of the different levels; each level of calculation unit then executes according to its calculation identifier and its allocated number of operations until the calculation is finished, so that a final calculation result is generated and output. The invention further discloses a pipelined computing acceleration co-processing system. With the disclosed method and system, calculation throughput is greatly improved, calculation time slots are reduced, and the efficiency of the calculation units is improved.

Description

Technical field

[0001] The present invention relates to the technical field of computing acceleration, and in particular to a pipelined computing acceleration co-processing method and system.

Background technique

[0002] The current digital age continues to develop rapidly, and requirements for data-processing capability keep increasing. Technologies such as big data, cloud computing, artificial intelligence, the Internet of Things, and autonomous driving continue to develop, and their applications require the ability to process massive amounts of data in a timely manner. The algorithms involved include various calculations, especially large-scale, regular, repeated calculations such as continuous iteration, training, and approximation. Usually, such regular, large-scale repetitive calculations are accelerated using application-specific integrated circuits (ASICs) or field-programmable gate arrays (FPGAs). [000...

Claims


Application Information

Patent Type & Authority: Applications (China)
IPC(8): G06F9/38, G06F13/32
CPC: G06F9/3869, G06F13/32
Inventor: 张又文智扬刘玉佳廖述京朱晓明
Owner: GUANGDONG COMM & NETWORKS INST