
Combination system of computing chip and memory chip based on high-speed serial channel interconnection

A computing-chip and memory-chip technology, applied in computing, computers, and digital computer components. It addresses problems such as interfaces that are difficult to implement, low sharing granularity, and high signal-integrity requirements, with the effects of realizing data sharing, improving memory access bandwidth, and reducing cost.

Pending Publication Date: 2019-01-04
深圳市安信智控科技有限公司

AI Technical Summary

Problems solved by technology

First, the bandwidth improvement offered by these storage technologies is limited. They use multi-bit parallel interface buses, so the main ways to further increase bandwidth are a wider bus or a higher interface rate; however, multi-bit parallel transmission places ever stricter demands on signal integrity, making wider and faster interfaces increasingly difficult to implement. If the main processor wants to further increase memory access bandwidth, it must integrate more memory access interfaces, but chip size and pin count limit how many wide-bus memory interfaces can be integrated (see the illustrative bandwidth calculation after this list of problems).
Second, the implementation cost of the newer storage technologies is high; for example, the engineering cost of advanced HBM technology can reach tens of millions of dollars.
Third, the above storage technologies either provide no sharing mode or only very low sharing granularity. For example, DDR4/DDR5, GDDR5, and HBM storage media can only be accessed by the main control chip to which they are directly connected and cannot be shared directly by multiple main control chips; although HMC can connect multiple main control chips, it does not support shared use by more than four of them.
These weak sharing characteristics raise, to a certain extent, the cost of adopting the new storage technologies.
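The first problem above comes down to simple peak-bandwidth arithmetic. The sketch below is not part of the original filing; all figures are hypothetical example numbers used to contrast a wide parallel bus with an aggregate of high-speed serial lanes.

```python
# Illustrative peak-bandwidth arithmetic (hypothetical example numbers).
# Parallel bus:  bandwidth = bus_width_bits * transfer_rate / 8  (bytes/s)
# Serial lanes:  bandwidth = lane_count     * lane_rate     / 8  (bytes/s)

def parallel_bandwidth_gbs(bus_width_bits: int, transfer_rate_gtps: float) -> float:
    """Peak bandwidth of a multi-bit parallel memory bus, in GB/s."""
    return bus_width_bits * transfer_rate_gtps / 8

def serial_bandwidth_gbs(lane_count: int, lane_rate_gbps: float) -> float:
    """Aggregate peak bandwidth of several high-speed serial lanes, in GB/s."""
    return lane_count * lane_rate_gbps / 8

# A 64-bit DDR4-3200 channel: 64 bits x 3.2 GT/s / 8 = 25.6 GB/s,
# at the cost of a wide parallel pin interface per channel.
print(parallel_bandwidth_gbs(64, 3.2))   # 25.6

# Roughly the same bandwidth from 8 serial lanes at 25 Gb/s each,
# using far fewer pins; adding lanes scales bandwidth further.
print(serial_bandwidth_gbs(8, 25.0))     # 25.0
```

Scaling the parallel bus means more pins and tighter signal-integrity margins, whereas the serial-channel approach in this filing scales bandwidth by adding lanes.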


Examples


Embodiment 1

[0022] As shown in Figure 2, a combination system of computing chips and memory chips based on high-speed serial channel interconnection includes one computing chip and x memory chipsets, where x is a natural number and each memory chipset contains several memory chips. The computing chip is the main control chip and is connected to each memory chipset through at least one high-speed serial channel; more than one high-speed serial channel per memory chipset may be used according to application requirements. When the system runs, data is distributed among all the memory chips, and all the memory chips are used exclusively by the single computing chip. In this embodiment, the memory chips within a memory chipset are cascaded through high-speed serial channels.
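The topology of Embodiment 1 can be sketched as a small software model. The class names and the address-interleaving policy below are illustrative assumptions, not taken from the filing; the point is simply that one computing chip distributes data across every memory chip in its x cascaded chipsets.

```python
# Minimal sketch of Embodiment 1: one computing chip exclusively using
# x memory chipsets, each a cascade of memory chips reached over
# high-speed serial channels. The interleaving policy is an assumption.
from dataclasses import dataclass, field

@dataclass
class MemoryChip:
    chip_id: int
    data: dict = field(default_factory=dict)   # word address -> value

@dataclass
class MemoryChipset:
    # chips[0] sits directly on the serial channel; each later chip is
    # reached through the previous one (the cascade).
    chips: list

class ComputingChip:
    def __init__(self, chipsets):
        self.chipsets = chipsets
        # Flatten so data can be distributed among all memory chips.
        self.all_chips = [c for cs in chipsets for c in cs.chips]

    def write(self, addr: int, value):
        # Stripe consecutive addresses across all memory chips.
        self.all_chips[addr % len(self.all_chips)].data[addr] = value

    def read(self, addr: int):
        return self.all_chips[addr % len(self.all_chips)].data.get(addr)

# x = 2 memory chipsets, 3 cascaded chips each, used by a single computing chip.
chipsets = [MemoryChipset([MemoryChip(i * 3 + j) for j in range(3)]) for i in range(2)]
cpu = ComputingChip(chipsets)
cpu.write(0, "a"); cpu.write(1, "b")
print(cpu.read(0), cpu.read(1))   # a b
```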

Embodiment 2

[0024] As shown in Figure 3, a combination system of computing chips and memory chips based on high-speed serial channel interconnection includes n computing chips and m memory chipsets, where n and m are natural numbers; each memory chipset contains several memory chips, and the memory chips within a memory chipset are cascaded through high-speed serial channels. The computing chips form a single computing chipset, and each computing chip is connected to every memory chipset through several high-speed serial channels. That is, each computing chip can access all memory chipsets, and the m memory chipsets are shared by the n computing chips. This system structure not only realizes data sharing among the n computing chips very simply, but also reduces the usage cost of the memory chips in the system.
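A minimal sketch of the Embodiment 2 topology follows; the class and method names are illustrative assumptions. Because every computing chip holds channels to every memory chipset, one chip can write data that another reads back, which is the data-sharing path this paragraph describes.

```python
# Minimal sketch of Embodiment 2: n computing chips all connected to
# m shared memory chipsets over high-speed serial channels.

class MemoryChipset:
    def __init__(self, cs_id: int):
        self.cs_id = cs_id
        self.store = {}           # address -> value, visible to every computing chip

class ComputingChip:
    def __init__(self, name: str, chipsets):
        self.name = name
        self.chipsets = chipsets  # every computing chip reaches every memory chipset

    def write(self, addr: int, value):
        # Route by address to one of the shared memory chipsets (assumed policy).
        self.chipsets[addr % len(self.chipsets)].store[addr] = value

    def read(self, addr: int):
        return self.chipsets[addr % len(self.chipsets)].store.get(addr)

# n = 2 computing chips share m = 4 memory chipsets.
shared = [MemoryChipset(i) for i in range(4)]
c0, c1 = ComputingChip("c0", shared), ComputingChip("c1", shared)

c0.write(42, "produced by c0")
print(c1.read(42))   # c1 reads the data c0 wrote, via the shared memory pool
```

Because the same memory pool serves all n computing chips, memory capacity does not have to be duplicated per chip, which is the cost reduction the embodiment claims.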

Embodiment 3

[0026] As shown in Figure 1, a combination system of computing chips and memory chips based on high-speed serial channel interconnection includes several computing chips and several memory chipsets; each memory chipset contains several memory chips, and the memory chips within a memory chipset are cascaded through high-speed serial channels. In a system structure based on mapping the algorithm dataflow graph, the computing chips are divided into h groups of computing chipsets, where h is a natural number and each computing chipset contains several computing chips. For example, the first computing chipset contains a1 computing chips, denoted c1.1, c1.2, ..., c1.a1; the second computing chipset contains a2 computing chips, denoted c2.1, c2.2, ..., c2.a2; and so on, until the h-th computing chipset contains ah computing chips, denoted ch.1, ch.2, ..., ch.ah. There are several memory chipsets b...
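The dataflow-graph mapping of Embodiment 3 can be sketched roughly as below; the example graph, group sizes, and placement rule are illustrative assumptions. Computing chips are partitioned into h groups along the stages of the dataflow graph, and a memory chipset is placed between any two groups that must exchange data.

```python
# Rough sketch of Embodiment 3: map an algorithm dataflow graph onto h
# groups of computing chips and place a shared memory chipset on every
# edge between groups that exchange data (all numbers are hypothetical).

# Stage -> number of computing chips in that group (a1, a2, ..., ah); h = 3 here.
groups = {"stage1": 3, "stage2": 2, "stage3": 4}

# Edges of the dataflow graph: producer stage -> consumer stage.
dataflow_edges = [("stage1", "stage2"), ("stage2", "stage3")]

# Name the computing chips c<group>.<index>, following the description.
chips = {g: [f"c{gi}.{j + 1}" for j in range(n)]
         for gi, (g, n) in enumerate(groups.items(), start=1)}

# One memory chipset b1, b2, ... between each pair of groups sharing data.
memory_chipsets = {edge: f"b{idx + 1}" for idx, edge in enumerate(dataflow_edges)}

for (producer, consumer), chipset in memory_chipsets.items():
    print(f"{chipset}: written by {chips[producer]}, read by {chips[consumer]}")
# b1 carries data from stage1's chips to stage2's chips, and so on.
```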


Abstract

The invention relates to the fields of computer system architecture and integrated circuit design, and discloses a combination system of computing chips and memory chips based on high-speed serial channel interconnection. The system includes a plurality of computing chips and a plurality of memory chipsets, each memory chipset comprising a plurality of memory chips. Based on a system architecture mapped from the algorithm dataflow graph, the computing chips are divided into groups of computing chipsets, each comprising a plurality of computing chips, and memory chipsets are arranged between computing chipsets that require data sharing and exchange. The computing chips in a computing chipset are connected to all adjacent memory chipsets through a plurality of high-speed serial channels. The system can flexibly increase the number of high-speed serial channels to raise the memory access bandwidth according to requirements; the overall system structure is flexible to design, effectively adapts to streaming data processing between computing chips, and keeps storage cost low.

Description

Technical field

[0001] The invention relates to the fields of computer system architecture and integrated circuit design, and in particular to a combination system of computing chips and memory chips based on high-speed serial channel interconnection.

Background technique

[0002] Among the various types of algorithms, a large number are memory-intensive: memory access operations account for a high proportion of the execution process, so memory access performance largely determines runtime performance. This is especially true for algorithms with irregular memory access patterns, i.e. poor memory access locality, where a cache cannot effectively accelerate execution; in such cases memory access bandwidth and latency play a decisive role in the runtime performance of the algorithm.

[0003] At present, in order to improve the performance of the storage system, the industry h...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06F15/173; G06F13/40
CPC: G06F13/4022; G06F15/17381; G06F2213/0002
Inventor: 童元满, 陆洪毅, 刘垚, 童乔凌
Owner: 深圳市安信智控科技有限公司