Traffic management architecture

A traffic management technology and architecture, applied in the field of traffic management, addressing the sheer volume of traffic that modern traffic management schemes must handle, the increasingly complex scheduling schemes required in routers that provide traffic management, and the inability of a single FIFO buffer to meet these requirements.

Status: Inactive
Publication Date: 2005-11-03
Owner: RAMBUS INC

AI Technical Summary

Benefits of technology

[0026] The sorting means preferably comprises a parallel processor, such as an array processor, more preferably a SIMD processor.
[0027] There may be further means to provide access for the parallel processors to shared state. A state engine may control access to the shared state.
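
As a rough, hypothetical sketch of what such a state engine could look like (none of the class or function names below come from the patent, and the lock-based arbitration is only a stand-in for the engine's own logic), the following serializes read-modify-write requests on shared per-flow state issued by parallel processing elements:

    # Hypothetical sketch: a "state engine" that serializes access to shared
    # state (e.g. per-flow counters) on behalf of parallel processing elements.
    import threading
    from collections import defaultdict

    class StateEngine:
        """Owns shared state; PEs submit read-modify-write requests to it."""
        def __init__(self):
            self._state = defaultdict(int)
            self._lock = threading.Lock()   # stands in for the engine's arbitration logic

        def update(self, key, fn):
            """Atomically apply fn to the state entry for `key` and return the new value."""
            with self._lock:
                self._state[key] = fn(self._state[key])
                return self._state[key]

    def processing_element(engine, flow_id, n_records):
        # Each PE processes records for some flow and updates shared per-flow state.
        for _ in range(n_records):
            engine.update(flow_id, lambda depth: depth + 1)

    engine = StateEngine()
    pes = [threading.Thread(target=processing_element, args=(engine, f % 4, 100))
           for f in range(8)]
    for t in pes: t.start()
    for t in pes: t.join()
    print(dict(engine._state))   # e.g. {0: 200, 1: 200, 2: 200, 3: 200}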

Problems solved by technology

The problem that modern traffic management schemes have to contend with is the sheer volume of traffic.
However, more complex schemes are required in routers which provide traffic management.
This cannot be achieved if all traffic is buffered in the same FIFO queue.
In reality, system realisation is confounded by some difficult implementation issues: High line speeds can cause large packet backlogs to rapidly develop during brief congestion events.
This demands high data read and write bandwidth into memory.
The processing overhead of some scheduling and congestion avoidance algorithms is high.
Priority queue ordering for some fair-queuing (FQ) scheduling algorithms is a non-trivial problem at high speeds.
For the exceptionally large numbers of input queues (of the order of 64 k) required for per-flow traffic handling, the first stage becomes unmanageably wide, to the point that it becomes impractical to implement the required number of schedulers.
Alternatively, in systems which aggregate all traffic into a small number of queues, parallelism between hardware schedulers cannot be exploited.
It then becomes extremely difficult to implement a single scheduler—even in optimised hardware—that can meet the required performance point.
The "queue first, think later" strategy often fails and data simply has to be jettisoned.



Embodiment Construction

[0034] The present invention turns current thinking on its head. FIG. 2 shows schematically the basic structure underlying the new strategy for effective traffic management. It could be described as a “think first, queue later™” strategy.

[0035] Packet data (traffic) received at the input 20 has the header portions stripped off and record portions of fixed length generated therefrom, containing information about the data, so that the record portions and the data portions can be handled separately. Thus, the data portions take the lower path and are stored in Memory Hub 21. At this stage, no attempt is made to organise the data portions in any particular order. However, the record portions are passed to a processor 22, such as a SIMD parallel processor, comprising one or more arrays of processor elements (PEs). Typically, each PE contains its own processor unit, local memory and register(s).
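
As a purely illustrative sketch of the split described in [0035] (the record layout, field names and the 4-byte header are assumptions for the example, not the patent's formats), the following separates each arriving packet into a fixed-length record, handed to the processor, and a payload stored unordered in a memory-hub-like store:

    # Illustrative only: separating each arriving packet into a fixed-length
    # record (header-derived metadata) and a payload stored, unordered, in a
    # "memory hub". Field names and sizes are hypothetical.
    import struct
    from itertools import count

    MEMORY_HUB = {}        # handle -> raw payload bytes, kept in arrival order, unsorted
    _handles = count()

    RECORD_FMT = "!IHHI"   # handle, flow id, length, arrival timestamp: 12-byte record

    def ingress(packet: bytes, flow_id: int, timestamp: int) -> bytes:
        """Split a packet: payload to the memory hub, fixed-length record to the processor."""
        header, payload = packet[:4], packet[4:]   # assume a 4-byte header for this sketch
        handle = next(_handles)
        MEMORY_HUB[handle] = payload               # stored as it arrives; no ordering imposed here
        # In the described architecture the record is derived from the header; here the
        # classification result (flow_id) is simply passed in to keep the sketch short.
        record = struct.pack(RECORD_FMT, handle, flow_id, len(payload), timestamp)
        return record                              # this fixed-length record goes to the PE array

    rec = ingress(b"HDR0" + b"payload-bytes", flow_id=7, timestamp=1000)
    print(len(rec), struct.unpack(RECORD_FMT, rec))   # 12 (0, 7, 13, 1000)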

[0036] In contrast to the prior architecture outlined in FIG. 1, the present architecture sha...



Abstract

An architecture for sorting incoming data packets in real time, on the fly, processes the packets and places them into an exit-order queue before storing the packets. This is in contrast to the traditional approach of storing first and sorting later, and it provides rapid processing capability. A processor generates packet records from an input stream and determines an exit order number for each related packet. The records are stored in an orderlist manager while the data portions are stored in a memory hub, for later retrieval in the exit order held by the manager. The processor is preferably a parallel processor array using SIMD and is provided with rapid access to shared state by a state engine.
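
To make the data flow in the abstract concrete, here is a minimal sketch under assumed names (OrderlistManager, exit_order and the toy priority-to-order mapping are hypothetical): records are placed into exit order as they arrive while the payloads sit unsorted in the memory hub, and an egress loop drains them in that precomputed order.

    # Minimal sketch of the "think first, queue later" flow from the abstract:
    # records are sorted into exit order before the payloads are ever touched.
    import heapq

    class OrderlistManager:
        """Holds packet records sorted by exit-order number."""
        def __init__(self):
            self._heap = []
        def insert(self, exit_order: int, handle: int):
            heapq.heappush(self._heap, (exit_order, handle))
        def pop_next(self):
            return heapq.heappop(self._heap) if self._heap else None

    memory_hub = {}                  # handle -> payload, stored in arrival order
    orderlist = OrderlistManager()

    # Ingress: compute the exit-order number up front (a toy priority scheme here),
    # store the payload unsorted, and queue only the small record.
    arrivals = [(0, b"low-a", 2), (1, b"high", 0), (2, b"low-b", 2)]   # (handle, payload, priority)
    for handle, payload, priority in arrivals:
        memory_hub[handle] = payload
        exit_order = priority * 1000 + handle      # toy mapping from priority to exit order
        orderlist.insert(exit_order, handle)

    # Egress: retrieve payloads strictly in the precomputed exit order.
    while (entry := orderlist.pop_next()) is not None:
        _, handle = entry
        print(memory_hub.pop(handle))              # b'high', b'low-a', b'low-b'

A binary heap stands in here for whatever ordered structure the orderlist manager actually uses; the point of the sketch is only that the ordering work happens before, not after, the payload is stored.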

Description

FIELD OF THE INVENTION [0001] The present invention concerns the management of traffic, such as data and communications traffic, and provides an architecture for a traffic manager that surpasses known traffic management schemes in terms of speed, efficiency and reliability. BACKGROUND TO THE INVENTION [0002] The problem that modern traffic management schemes have to contend with is the sheer volume. Data arrives at a traffic handler from multiple sources at unknown rates and volumes and has to be received, sorted and passed on “on the fly” to the next items of handling downstream. Received data may be associated with a number of attributes by which priority allocation, for example, is applied to individual data packets or streams, depending on the class of service offered to an individual client. Some traffic may therefore have to be queued whilst later arriving but higher priority traffic is processed. A router's switch fabric can deliver packets from multiple ingress ports to one ...


Application Information

Patent Type & Authority: Application (United States)
IPC(8): H04L12/54; H04L47/32; H04L47/56
CPC: H04L12/5693; H04L47/2441; H04L47/32; H04L47/562; H04L49/9042; H04L47/6215; H04L47/624; H04L49/90; H04L47/60; H04L47/50; G06F9/466; G06F15/80
Inventor: SPENCER, ANTHONY
Owner: RAMBUS INC