
Computing architecture

A computing architecture and computing-array technology in the field of computing architectures. It addresses problems such as frequent Cache Misses, low Cache utilization, and restricted computing performance, thereby reducing performance bottlenecks, reducing Cache Misses, and improving flexibility.

Active Publication Date: 2020-08-11
XI AN JIAOTONG UNIV

AI Technical Summary

Problems solved by technology

Specifically, when this type of computing library handles large-scale equation-system solving and matrix operations, it inevitably suffers from frequent Cache Misses and low computing efficiency. Extremely low Cache utilization and limited memory bandwidth then become the main bottlenecks, severely restricting overall computing performance.



Examples


Embodiment Construction

[0033] In one embodiment, as shown in Figure 1, a computing architecture is disclosed, comprising: an off-chip memory, an on-chip cache unit, a transmitting unit, a pre-reorganization network, a post-reorganization network, a main computing array, a data dependency controller, and a global scheduler; wherein,

[0034] the off-chip memory is used to store all large-scale data in a block format, the large-scale data being divided into multiple blocks of equal size;
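The equal-size block storage described in [0034] can be sketched in software as a simple tiling routine. This is an illustrative sketch only; the block size, padding strategy, and function names here are assumptions, not taken from the patent:

```python
import numpy as np

def to_blocks(matrix, b):
    """Split a 2-D matrix into equal-size b x b tiles (zero-padded at the
    edges so every tile has the same shape)."""
    n, m = matrix.shape
    rows, cols = -(-n // b), -(-m // b)               # ceiling division
    padded = np.zeros((rows * b, cols * b), dtype=matrix.dtype)
    padded[:n, :m] = matrix
    # tiles[i][j] is the (i, j)-th b x b block, stored contiguously, so
    # streaming one block touches a single cache-friendly region instead
    # of strided rows of the full matrix
    return [[np.ascontiguousarray(padded[i*b:(i+1)*b, j*b:(j+1)*b])
             for j in range(cols)] for i in range(rows)]

A = np.arange(36.0).reshape(6, 6)
tiles = to_blocks(A, 2)       # a 3 x 3 grid of 2 x 2 blocks
```

Storing each tile contiguously is what lets the on-chip cache hold a whole working block at once, rather than scattered strided rows.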

[0035] the on-chip cache unit is used to store part of the data of the blocks to be computed and the dependent data required for the computation;

[0036] the transmitting unit is used to read the data of the corresponding block from the on-chip cache unit and send it to the pre-reorganization network in the order specified by the scheduling algorithm;

[0037] the main computing array is used to complete the computation on the data of the main block;

[0038] the pre-reorganization network is used to perform arbitrar...
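The per-block dataflow through the units above can be mimicked in software roughly as follows. The stage bodies (a transpose standing in for the reorganization networks, a doubling kernel standing in for the computing array) are placeholders, since the excerpt does not specify the actual hardware operations:

```python
import numpy as np

def pre_reorganize(tile):
    return tile.T                       # placeholder: deliver operands column-major

def compute(tile):
    return tile * 2.0                   # placeholder per-block kernel

def post_reorganize(tile):
    return tile.T                       # undo the pre-reorganization

def process_blocks(schedule, on_chip_cache):
    """Global-scheduler loop: the transmitting unit reads each scheduled
    block from the on-chip cache and pushes it through
    pre-reorganization -> main computing array -> post-reorganization."""
    results = {}
    for block_id in schedule:           # order chosen by the scheduling algorithm
        tile = on_chip_cache[block_id]
        results[block_id] = post_reorganize(compute(pre_reorganize(tile)))
    return results

cache = {0: np.eye(2), 1: np.ones((2, 2))}
out = process_blocks([1, 0], cache)     # blocks visited in scheduled order
```

In the hardware described, these stages would run as a pipeline rather than as sequential function calls; the sketch only shows the per-block data path.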



Abstract

A computing architecture comprises an off-chip memory, an on-chip cache unit, a pre-fetching unit, a global scheduler, a transmitting unit, a pre-reorganization network, a post-reorganization network, a main computing array, a write-back cache unit, a data dependency controller, and an auxiliary computing array. The architecture reads data blocks into the on-chip cache by pre-fetching and performs computation block by block. During the computation of a block, a block-switching network recombines the data structure, and a data dependency module handles possible data dependencies between different blocks. The computing architecture increases the data utilization rate and improves data-processing flexibility, thereby reducing Cache Misses and relieving memory-bandwidth pressure.
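The pre-fetching behaviour the abstract describes can be approximated in software with double buffering: while block k is being computed, block k+1 is fetched in the background. A minimal sketch, in which `fetch` and `compute` are caller-supplied placeholders (the real unit would hide off-chip DRAM latency, not thread latency):

```python
from concurrent.futures import ThreadPoolExecutor

def run_with_prefetch(block_ids, fetch, compute):
    """Double buffering: overlap the fetch of block i+1 with the
    computation of block i. Assumes a non-empty block_ids list."""
    results = []
    with ThreadPoolExecutor(max_workers=1) as pool:
        pending = pool.submit(fetch, block_ids[0])              # warm the buffer
        for i in range(len(block_ids)):
            data = pending.result()                             # wait for block i
            if i + 1 < len(block_ids):
                pending = pool.submit(fetch, block_ids[i + 1])  # prefetch i+1
            results.append(compute(data))                       # overlaps that fetch
    return results

out = run_with_prefetch([0, 1, 2],
                        fetch=lambda b: b * 10,
                        compute=lambda x: x + 1)
```

The point of the overlap is that, in steady state, memory latency is paid only once at the start instead of once per block.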

Description

Technical Field

[0001] The disclosure belongs to the technical field of large-scale data processing, and in particular relates to a computing architecture.

Background Technique

[0002] Solving large-scale linear equation systems and matrix operations are among the most critical operations in modern scientific and engineering computing. At present, such operations mainly rely on high-performance linear algebra libraries, such as CUBLAS on GPU platforms, and the Linear Algebra Package (LAPACK) and Intel Math Kernel Library (MKL) on CPU platforms. These libraries generally adopt matrix inversion and equation-system solving algorithms based on LU decomposition, and use highly parallel computing units in a Single Instruction Multiple Data (SIMD) style to maximize the parallelization of data processing. However, for large-scale problems, the computing data cannot be stored entirely in the on-chip cache (such as multi-level cache...
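The LU-based blocked approach these libraries use can be illustrated with a minimal right-looking blocked LU factorization. This is an illustrative sketch (no pivoting, so it assumes a well-conditioned, e.g. diagonally dominant, matrix), not the libraries' actual implementation:

```python
import numpy as np

def blocked_lu(A, b):
    """Right-looking blocked LU factorization without pivoting.
    Each step factors a b x b diagonal block, solves for the b-wide
    panels, then applies one cache-friendly GEMM to the trailing matrix
    -- which is why libraries tile: the hot data fits in cache."""
    A = A.copy()
    n = A.shape[0]
    for k in range(0, n, b):
        e = min(k + b, n)
        # unblocked LU of the diagonal block, in place
        for j in range(k, e):
            A[j+1:e, j] /= A[j, j]
            A[j+1:e, j+1:e] -= np.outer(A[j+1:e, j], A[j, j+1:e])
        if e < n:
            L_kk = np.tril(A[k:e, k:e], -1) + np.eye(e - k)
            U_kk = np.triu(A[k:e, k:e])
            A[k:e, e:] = np.linalg.solve(L_kk, A[k:e, e:])          # U panel
            A[e:, k:e] = np.linalg.solve(U_kk.T, A[e:, k:e].T).T    # L panel
            A[e:, e:] -= A[e:, k:e] @ A[k:e, e:]                    # trailing update
    return A

# verify: L @ U reconstructs a diagonally dominant test matrix
rng = np.random.default_rng(0)
M = rng.random((8, 8)) + 8.0 * np.eye(8)
F = blocked_lu(M, 4)
L = np.tril(F, -1) + np.eye(8)
U = np.triu(F)
```

When the matrix no longer fits in cache, the trailing-matrix update dominates, and every pass over it that is not blocked becomes a stream of Cache Misses; this is the bottleneck the disclosed architecture targets.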

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06F15/78
CPC: G06F15/7867; G06F15/781; G06F9/3887; G06F12/0813; G06F12/0862; G06F2212/454; G06F12/0207; G06F2212/1024; G06F2212/1048; G06F12/0879; G06F12/0804; G06F9/3555; G06F9/3838; G06F2212/1021
Inventor: 夏天, 任鹏举, 赵浩然, 李泽华, 赵文哲, 郑南宁
Owner: XI AN JIAOTONG UNIV