
A resource scheduling and allocation method and system for matching computing

A technology of resource scheduling for matching computing, applied in the field of resource scheduling methods and systems for matching computing. It addresses problems such as flow integrity being destroyed during reallocation and poor load balancing effect, achieving the effects of a reduced false positive rate and low running overhead.

Active Publication Date: 2019-05-10
GUANGDONG INST OF SCI & TECH

AI Technical Summary

Problems solved by technology

A static load balancing algorithm divides traffic according to a preset strategy. Its advantage is that it is simple to implement and introduces no additional operating overhead; however, because it does not adjust the distribution in real time according to the actual network load, its load balancing effect is relatively poor.
A dynamic load balancing algorithm distributes traffic to each engine node according to the load of each node or link during operation, keeping the traffic on each node roughly balanced in real time. Its load balancing effect is therefore much better than that of a static algorithm, but it incurs additional operating overhead for the system.
[0004] Therefore, most existing technologies use dynamic load balancing algorithms to balance the system load. During dynamic load distribution, however, the original flow characteristics may be destroyed: packets belonging to the same session may be distributed to different processing nodes. For routing-table lookup this may not affect processing performance, but in network security applications it can be fatal. Many network attacks now hide their attack characteristics, so the attack cannot be detected by inspecting packets independently; it can only be detected after the packets are spliced and reassembled. If the load balancing system cannot deliver an entire session to a single processor node, the intrusion detection system will be unable to detect the attack, resulting in missed detections. Facing this problem, some scholars have proposed a session-oriented adaptive load balancing algorithm that classifies IP packets into multiple domains and dynamically adjusts TCP traffic, performing dynamic load balancing without affecting session integrity. Simulation experiments show that the algorithm achieves a certain load balancing effect, causes little damage to session integrity, and is relatively easy to implement, but it still lacks a proof of stability.
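To make the session-integrity constraint concrete, the following is a minimal sketch of session-preserving dispatch: all packets that share a session key are sent to the same engine node, so reassembly-based detection still sees the complete session. The hashing scheme, packet field names and node count are illustrative assumptions, not the algorithm described in this patent.

```python
import hashlib

NUM_ENGINE_NODES = 4  # assumed engine count, for illustration only


def session_key(src_ip, src_port, dst_ip, dst_port, proto):
    """Build a direction-independent key so both halves of a TCP session match."""
    a, b = (src_ip, src_port), (dst_ip, dst_port)
    lo, hi = (a, b) if a <= b else (b, a)
    return f"{proto}|{lo[0]}:{lo[1]}|{hi[0]}:{hi[1]}"


def dispatch_node(pkt):
    """Session-preserving dispatch: every packet of a session hashes to the
    same engine node, so splice-and-reassemble detection sees the whole session."""
    key = session_key(pkt["src_ip"], pkt["src_port"],
                      pkt["dst_ip"], pkt["dst_port"], pkt["proto"])
    digest = hashlib.sha1(key.encode()).digest()
    return int.from_bytes(digest[:4], "big") % NUM_ENGINE_NODES


# Example: both directions of the same session land on the same node.
fwd = {"src_ip": "10.0.0.1", "src_port": 5555,
       "dst_ip": "10.0.0.2", "dst_port": 80, "proto": "tcp"}
rev = {"src_ip": "10.0.0.2", "src_port": 80,
       "dst_ip": "10.0.0.1", "dst_port": 5555, "proto": "tcp"}
assert dispatch_node(fwd) == dispatch_node(rev)
```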
[0005] In real network traces, a small number of large flows accounts for a relatively large proportion of the traffic. To deal with this, the prior art includes a security splitting algorithm based on adjusting large flows. Simulation experiments show that the algorithm achieves a good bit-stream balancing effect and a relatively low flow corruption rate, but its complexity is relatively high.
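As a simple illustration of the large-flow observation, the sketch below counts bytes per session and flags the few flows that cross a size threshold, since only those flows would be worth re-assigning. The threshold, packet fields and counters are assumptions for illustration and do not reproduce the cited splitting algorithm.

```python
from collections import defaultdict

LARGE_FLOW_BYTES = 10 * 1024 * 1024  # assumed threshold: 10 MB marks a "large" flow

flow_bytes = defaultdict(int)  # per-session byte counters


def observe(pkt):
    """Accumulate bytes per session; report the key the moment a flow becomes large.

    Only these few large flows would then be candidates for re-assignment,
    mirroring the observation that a handful of flows carry most of the traffic.
    """
    key = (pkt["src_ip"], pkt["src_port"], pkt["dst_ip"], pkt["dst_port"], pkt["proto"])
    before = flow_bytes[key]
    flow_bytes[key] += pkt["length"]
    if before < LARGE_FLOW_BYTES <= flow_bytes[key]:
        return key  # newly classified as a large flow
    return None
```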



Examples


Embodiment 1

[0051] As shown in Figure 1 and Figure 2, a resource scheduling and allocation method for matching computing includes:

[0052] Monitor the computing load of each engine node in real time at intervals of Δt, and sort the load of each engine node;

[0053] When an empty or overloaded node exists, the data packets awaiting detection on the node with the heaviest load are dispatched to the node with the lightest load in a certain proportion, taking the session as the unit, and the nodes are traversed to perform load balancing adjustment.

[0054] This embodiment specifically includes the following steps:

[0055] S1: Initialize Δt;

[0056] S2: Capture data packets and distribute them to each detection engine node;

[0057] S3: After a Δt interval, detect the workload of each detection engine node and sort the nodes' current loads from heavy to light;

[0058] S4: Detect whether there is an empty or overloaded node;

[0059] S5: If ye...
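Steps S1 to S5 can be read as a periodic monitor-and-rebalance loop. The sketch below is one possible reading under assumed data structures (per-node load values and pending session lists); the Δt value, overload threshold, migration ratio, and the truncated branch in S5 are all labelled assumptions rather than the patented implementation.

```python
import time
from dataclasses import dataclass, field

DELTA_T = 1.0             # assumed monitoring interval Δt, in seconds
OVERLOAD_THRESHOLD = 0.8  # assumed: load above 80% of capacity counts as overloaded
MIGRATION_RATIO = 0.5     # assumed: fraction of the heaviest node's sessions to move


@dataclass
class EngineNode:
    name: str
    load: float = 0.0                 # assumed normalised load in [0, 1]
    pending_sessions: list = field(default_factory=list)


def rank_by_load(nodes):
    """S3: sort the detection engine nodes from heaviest to lightest."""
    return sorted(nodes, key=lambda n: n.load, reverse=True)


def needs_adjustment(ranked):
    """S4: adjustment is needed when an empty or an overloaded node exists."""
    return ranked[-1].load == 0.0 or ranked[0].load > OVERLOAD_THRESHOLD


def rebalance(ranked):
    """S5 (assumed reading of the truncated step): move pending sessions, in
    whole-session units, from the heaviest node to the lightest node."""
    heavy, light = ranked[0], ranked[-1]
    budget = int(len(heavy.pending_sessions) * MIGRATION_RATIO)
    for _ in range(budget):
        light.pending_sessions.append(heavy.pending_sessions.pop())


def run(nodes):
    """S1-S5 as a loop; packet capture and distribution (S2) is assumed to run
    elsewhere and to update each node's load and pending_sessions."""
    while True:
        time.sleep(DELTA_T)           # wait one Δt interval (S1 initialises Δt)
        ranked = rank_by_load(nodes)  # S3
        if needs_adjustment(ranked):  # S4
            rebalance(ranked)         # S5
```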

Embodiment 2

[0074] As shown in Figure 3, a resource scheduling and allocation system for matching computing includes a load detector, a load analyzer, a traffic scheduler, and multiple detection engines. The load detector is connected to the detection engines and to the load analyzer, and the load analyzer is also connected to the traffic scheduler. The load detector dynamically detects the workload of each engine node at intervals of Δt; the load analyzer sorts the current load of each engine node from heavy to light; and the traffic scheduler, when an overloaded or empty node appears, dispatches the data packets awaiting detection on the engine node with the heaviest load to the node with the lightest load in units of sessions, and distributes the sessions arriving in the next Δt period to the sorted engine nodes in proportion.
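Read structurally, the system wires a load detector, a load analyzer, and a traffic scheduler in front of several detection engines. The sketch below only illustrates that wiring; the class names, overload threshold, migration ratio, and proportional redistribution rule are assumptions for illustration, not the patented design.

```python
from dataclasses import dataclass, field


@dataclass
class EngineNode:  # same assumed shape as in the Embodiment 1 sketch above
    name: str
    load: float = 0.0
    pending_sessions: list = field(default_factory=list)


class LoadDetector:
    """Samples each detection engine's workload once per Δt interval."""

    def __init__(self, engines, delta_t=1.0):  # delta_t value is an assumption
        self.engines = engines
        self.delta_t = delta_t

    def sample(self):
        return {e.name: e.load for e in self.engines}


class LoadAnalyzer:
    """Sorts nodes from heaviest to lightest and flags empty or overloaded nodes."""

    def analyze(self, loads, overload_threshold=0.8):  # threshold is an assumption
        ranked = sorted(loads.items(), key=lambda kv: kv[1], reverse=True)
        unbalanced = ranked[-1][1] == 0.0 or ranked[0][1] > overload_threshold
        return ranked, unbalanced


class TrafficScheduler:
    """Moves whole sessions from the heaviest to the lightest node; new sessions
    arriving in the next Δt period would then be spread over the ranked nodes
    in proportion (the proportional rule itself is assumed, not shown)."""

    def reschedule(self, engines, ranked, ratio=0.5):  # migration ratio is an assumption
        by_name = {e.name: e for e in engines}
        heaviest, lightest = by_name[ranked[0][0]], by_name[ranked[-1][0]]
        for _ in range(int(len(heaviest.pending_sessions) * ratio)):
            lightest.pending_sessions.append(heaviest.pending_sessions.pop())
```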

[0075] In t...



Abstract

The invention discloses a resource scheduling and allocation method for matching computing, which comprises: monitoring the computing load of each engine node in real time at an interval of Δt and sorting the load of each engine node; and, when an empty or overloaded node appears, dispatching the data packets awaiting detection on the node with the heaviest load to the node with the lightest load in a certain proportion, taking the session as the unit, traversing the nodes and performing load balancing adjustment.

Description

Technical field

[0001] The invention relates to the field of network intrusion detection, and in particular to a resource scheduling method for matching computing and a system thereof.

Background technique

[0002] In a high-speed network environment, the detection engine of a Network Intrusion Detection System (NIDS) faces a serious performance bottleneck. This is mainly reflected in the large increase in detection data samples caused by high-speed network data flows, and in the considerable growth of matching patterns caused by the diversification of attacks. Solving the NIDS performance bottleneck in a high-speed network environment has become one of the hot topics in the information security field. The existing technology offers two main solutions to this problem: one is to optimize the efficiency of the detection algorithm of a single detection engine; however, limited by the processing speed of a single engine node, this m...


Application Information

Patent Type & Authority: Patent (China)
IPC(8): H04L12/863, H04L12/803, H04L29/06, H04L29/08
CPC: H04L47/125, H04L47/6255, H04L63/123, H04L67/14
Inventor: 杨忠明, 梁本来, 李威, 常亚萍
Owner: GUANGDONG INST OF SCI & TECH