Method and system for implementing a stream processing computer architecture

A computer and stream computing technology, applied to computers, digital computer components, and computing generally, which addresses problems such as the increasing access times of large data sets and the increasing transmission bandwidth required to move them.

Active Publication Date: 2011-07-27
IBM CORP

AI Technical Summary

Problems solved by technology

However, building large scalable stream processing systems suffers from various drawbacks, such as the challenge ...



Embodiment Construction

[0018] An exemplary embodiment according to the present invention discloses an interconnection architecture for a stream processing computer system and a process for implementing that architecture. The interconnection architecture consists of two network types that complement each other's functionality and address connectivity between tightly coupled groups of processing nodes. Such groups, or clusters, can be locally interconnected using a variety of protocols and both static and dynamic network topologies (e.g., 2D/3D grids, hierarchical fully connected components, switch-based components). Network and switch functionality can be incorporated within the processor chips, so that clusters can be formed by interconnecting the processor chips directly to each other without external switches. An example of such a technology and protocol is HyperTransport3 (HT3). Packaging constraints, transfer signal speeds, and allowable distances of interconnects ...
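As a rough illustration of the two-tier interconnect described above, the sketch below models processor nodes grouped into super node clusters with a local interconnect, and clusters joined to one another through an optical circuit switch via optical external links. The class and field names, and the Python representation itself, are illustrative assumptions rather than anything specified in the patent text.

# Minimal sketch (assumed names, not from the patent) of the two network tiers:
# a local interconnect inside each super node cluster and an optical circuit
# switch (OCS) layer joining clusters through optical external links.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Node:
    node_id: int                 # a physical computation node (processor chip)

@dataclass
class SuperNodeCluster:
    cluster_id: int
    nodes: List[Node]
    local_topology: str          # e.g. "2D-grid", "3D-grid", "fully-connected", "switch-based"
    external_links: int          # optical external links available to the OCS layer

@dataclass
class OpticalCircuitSwitch:
    ocs_id: int
    circuits: List[Tuple[int, int]] = field(default_factory=list)

    def connect(self, a: SuperNodeCluster, b: SuperNodeCluster) -> None:
        # Configure a circuit that provides connectivity between two clusters.
        self.circuits.append((a.cluster_id, b.cluster_id))

# Hypothetical usage: two four-node clusters, each fully connected locally
# (e.g. via an HT3-style point-to-point fabric), joined through one OCS.
c0 = SuperNodeCluster(0, [Node(i) for i in range(4)], "fully-connected", external_links=2)
c1 = SuperNodeCluster(1, [Node(i) for i in range(4, 8)], "fully-connected", external_links=2)
ocs = OpticalCircuitSwitch(0)
ocs.connect(c0, c1)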


Abstract

The present disclosure relates to a method for implementing a stream processing computer architecture, including creating a stream computer processing (SCP) system by forming a super node cluster of processors representing physical computation nodes ("nodes"), communicatively coupling the processors via a local interconnection means ("interconnect"), and communicatively coupling the cluster to an optical circuit switch (OCS) via optical external links ("links"). The OCS is communicatively coupled to another cluster of processors via the links. The method also includes generating a stream computation graph including kernels and data streams, and mapping the graph to the SCP system, which includes assigning the kernels to the clusters and respective nodes, assigning data stream traffic between the kernels to the interconnect when the data stream is between nodes in the same cluster, and assigning the traffic between the kernels to the links when the data stream is between nodes in different clusters. The method also includes configuring the OCSs to provide connectivity between mapped clusters.
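The mapping step in the abstract can be pictured with a short sketch: kernels are placed on (cluster, node) slots, and each data stream is then routed over the local interconnect when its endpoints share a cluster, or over the optical links (through the OCS) when they do not. The function name, the round-robin placement policy, and the data shapes below are illustrative assumptions, not the patent's algorithm.

# Hedged sketch of mapping a stream computation graph onto the SCP system.
# The placement policy and all names are assumptions for illustration only.
from typing import Dict, List, Tuple

def map_stream_graph(
    kernels: List[str],
    streams: List[Tuple[str, str]],      # (producer kernel, consumer kernel)
    clusters: Dict[int, List[int]],      # cluster id -> node ids in that cluster
) -> Dict[str, object]:
    # Assumed policy: round-robin placement of kernels onto (cluster, node) slots.
    slots = [(cid, nid) for cid, nodes in clusters.items() for nid in nodes]
    placement = {k: slots[i % len(slots)] for i, k in enumerate(kernels)}

    interconnect_traffic, optical_traffic = [], []
    for producer, consumer in streams:
        same_cluster = placement[producer][0] == placement[consumer][0]
        (interconnect_traffic if same_cluster else optical_traffic).append((producer, consumer))

    return {
        "placement": placement,                 # kernel -> (cluster, node)
        "interconnect": interconnect_traffic,   # intra-cluster data streams
        "optical_links": optical_traffic,       # inter-cluster data streams (via the OCS)
    }

# Hypothetical usage: four kernels in a pipeline, two clusters of two nodes each.
mapping = map_stream_graph(
    kernels=["k0", "k1", "k2", "k3"],
    streams=[("k0", "k1"), ("k1", "k2"), ("k2", "k3")],
    clusters={0: [0, 1], 1: [2, 3]},
)
# ("k1", "k2") crosses clusters, so it is assigned to the optical links.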

Description

Technical Field

[0001] The present invention relates to a data processing system, and more particularly, to a method and system for implementing a stream processing computer architecture.

Background Art

[0002] The impact of communications on computer system performance continues to grow both at the macro level (e.g., blade servers and computer clusters) and at the micro level (e.g., within a single processor chip with many cores). Traditional approaches to computing, which rely on shortening access times to main memory via cache hierarchies, are reaching a point of diminishing returns. This is so in part because of the increasing latency of I/O data transfers relative to the speed of the processing cores, and the increasing portion of the (limited) on-chip power dissipation budget required for cache memory and global communication lines. At the same time, strict on-chip power dissipation constraints have caused many major semiconductor companies to move to multi-core or ...


Application Information

IPC(8): G06F15/173, G06F9/50
CPC: G06F9/5083, G06F15/17343, G06F9/5061, Y02D10/00, G06F9/46, G06F9/06, G06F9/38
Inventor: E. Schenfeld, T. B. Smith III
Owner: IBM CORP