
Network processor load balancing and scheduling method based on residual task processing time compensation

A network processor and load balancing technology, applied in data exchange networks, digital transmission systems, electrical components, etc.; it addresses problems such as coarse-grained data flow partitioning, uneven data flow size distribution, and the resulting difficulty of achieving load balance.

Active Publication Date: 2014-03-26
BEIHANG UNIV

Problems solved by technology

Packet-based load balancing has two deficiencies. First, it requires an additional design for preserving the order of packets within a data flow, and such order-preservation designs often significantly degrade the performance of multi-core processors. Second, because most packet processing must maintain a session table, a packet-based load balancer may distribute packets of the same data flow to different engines, causing two processing units to access the same data structure simultaneously and thus requiring extra synchronization overhead.
[0007] The disadvantages of the data-flow-based load balancing scheme are as follows. First, load balancing needs to know the load characteristics of the allocation unit. In multi-core processors it can generally be assumed that each data packet requires roughly the same processing power, but data flows differ: the number of packets belonging to a given flow is unpredictable, and Internet traffic statistics show that flow sizes are distributed very unevenly. Second, since a data flow is composed of many data packets, flow-based partitioning is relatively coarse-grained, making load balance hard to achieve.
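The granularity problem described above can be illustrated with a minimal sketch (not from the patent): a conventional hash-based flow distributor pins every packet of a flow to one engine, so a single heavy "elephant" flow overloads that engine no matter how many engines exist. All names and traffic figures here are hypothetical.

```python
import zlib

def flow_key(src_ip, dst_ip, src_port, dst_port, proto):
    """Five-tuple string identifying a data flow (illustrative format)."""
    return f"{src_ip}:{src_port}->{dst_ip}:{dst_port}/{proto}"

def assign_engine(key, num_engines):
    # Static hash: every packet of a flow lands on the same engine,
    # so per-flow state needs no cross-engine synchronization,
    # but per-engine load depends entirely on flow sizes.
    return zlib.crc32(key.encode()) % num_engines

# One heavy flow (900 packets) and one light flow (100 packets):
packets = [flow_key("10.0.0.1", "10.0.0.2", 1234, 80, "tcp")] * 900 \
        + [flow_key("10.0.0.3", "10.0.0.4", 5678, 80, "tcp")] * 100

loads = [0, 0, 0, 0]
for p in packets:
    loads[assign_engine(p, 4)] += 1
# At least one of the 4 engines now carries >= 900 of 1000 packets,
# regardless of how the hash scatters the two flows.
```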

Method used




Embodiment Construction

[0049] The present invention will be further described in detail below in conjunction with the accompanying drawings.

[0050] The method proposed by the invention solves the problem of distributing data streams across processing units running in parallel on a multi-core processor: each data stream is preferentially allocated to the processing unit found to have the smallest load.

[0051] The network processor load balancing scheduling method based on remaining task processing time compensation of the present invention includes the following steps:

[0052] Step A: Associate information with the data stream

[0053] (A) A multi-core processor includes multiple processing units, expressed as M = {m_1, m_2, ..., m_K}, where m_1 denotes the first processing unit, m_2 the second processing unit, and m_K the last processing unit; K denotes the total number of processing units. For convenience in the enumeration below, m_k also denotes an arbitrary processing uni...
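As an illustration only (the patent text is truncated at this point), the processing-unit set M = {m_1, ..., m_K} of Step A and the per-unit residual task time that later steps rely on might be represented as follows. The value K = 4, the class, and the field names are assumptions, not the patent's notation.

```python
from dataclasses import dataclass

K = 4  # total number of processing units (hypothetical value)

@dataclass
class ProcessingUnit:
    """One processing unit m_k with its accumulated residual task time."""
    index: int
    remaining_time: float = 0.0  # sum of processing times of queued packets

# M = {m_1, m_2, ..., m_K}
M = [ProcessingUnit(k) for k in range(1, K + 1)]

# Flow table associating each data stream with the unit it was
# scheduled to, so packets of one flow stay on one unit.
flow_table = {}
```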



Abstract

The invention discloses a network processor load balancing and scheduling method based on residual task processing time compensation. The method includes: using the processing time of each data packet as the load scheduling weight, counting the residual task processing time of each processing unit, calculating the load of each processing unit from that task processing time, and selecting the processing unit with the minimum load to schedule the data packets. The method overcomes the defects of traditional data-stream-based load balancing algorithms and achieves a good load balancing effect on multi-core network processors.
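The scheduling rule summarized in the abstract can be sketched as follows. This is a hedged reconstruction, not the patent's actual implementation: it tracks each unit's residual task processing time, keeps packets of an already-seen flow on that flow's unit (preserving packet order), and sends new flows to the unit with the minimum residual time. All identifiers and costs are invented for illustration.

```python
class Unit:
    def __init__(self, index):
        self.index = index
        self.remaining_time = 0.0  # residual task processing time

def schedule_packet(units, flow_table, flow_id, cost):
    if flow_id in flow_table:
        # Known flow: keep it on its unit so packet order is preserved.
        unit = flow_table[flow_id]
    else:
        # New flow: pick the unit with the minimum residual task time.
        unit = min(units, key=lambda u: u.remaining_time)
        flow_table[flow_id] = unit
    # Compensate the unit's load by this packet's processing time.
    unit.remaining_time += cost
    return unit

units = [Unit(k) for k in range(4)]
table = {}
# A heavy flow (10 packets, cost 5.0 each) fills one unit's residual
# time; a later light flow is steered to a different, idle unit.
for _ in range(10):
    schedule_packet(units, table, "flow-A", 5.0)
schedule_packet(units, table, "flow-B", 1.0)
```

Because the weight is per-packet processing time rather than packet count, a unit holding few but expensive packets is correctly seen as loaded.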

Description

Technical field

[0001] The present invention relates to a scheduling method for network processors, and more particularly to a load balancing scheduling method for network processors based on remaining task processing time compensation.

Background technique

[0002] A network processor (Network Processor, NP) is a new generation of high-speed programmable processor used to perform data processing and forwarding. Functionally, the network processor mainly completes data processing and forwarding tasks (see "Network Processor Principles and Technology", 1st edition, November 2004, edited by Zhang Hongke et al., page 1). [0003] In order to improve the performance of network processors and overcome the performance bottleneck of single-core processors, multi-core processors emerged. A multi-core processor, also called an on-chip multi-core processor, integrates multiple processing cores of the same structure on the same chip. A multi-core pr...

Claims


Application Information

IPC(8): H04L12/803, H04L12/863
Inventors: 李云春 (Li Yunchun), 王国栋 (Wang Guodong), 李巍 (Li Wei), 李靖轩 (Li Jingxuan)
Owner BEIHANG UNIV