Caching method and device

A caching and cache-area technology, applied in the field of network communication, that solves problems such as queue congestion, wasted cache usage, and previously unresolvable QoS scheduling failures, and achieves the effect of guaranteeing QoS service quality and a good user experience.

Active Publication Date: 2019-11-22
NEW H3C BIG DATA TECH CO LTD


Problems solved by technology

However, the disadvantage of the first solution is that cache reservation can only support a small number of queues; it can solve neither the problem of abnormal WRED thresholds nor, effectively, the problem of QoS scheduling failure.
This solution can solve the problem of QoS scheduling failure, but it wastes a great deal of cache. In live-network applications the cache is shared by all queues, yet the queues are not necessarily all congested at the same time.

Method used



Examples


Example Embodiment

[0027] Example 1

[0028] Specifically, as shown in Figure 3, the solution process of this embodiment of the present disclosure is as follows:

[0029] A caching method includes the following steps:

[0030] S1. Divide the shared cache into multiple cache areas, each of which corresponds to a different priority type;

[0031] The multiple cache areas are respectively used for buffering and forwarding the corresponding message queues, and there may be two or more of them. For example, three cache areas may be used: the first cache area buffers message queues that must not drop packets, the second cache area buffers high-priority queues, and the third cache area buffers low-priority queues. Together, the first, second, and third cache areas occupy at most 100% of the shared cache.
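As a minimal illustration of step S1, the Python sketch below divides a shared cache into per-priority areas whose proportions sum to at most 100%. The function and field names are hypothetical, and the proportions are illustrative, not taken from the disclosure:

    from dataclasses import dataclass

    @dataclass
    class CacheArea:
        name: str       # priority type this area serves
        capacity: int   # bytes reserved for this area
        used: int = 0   # bytes currently buffered

    def partition_shared_cache(total_bytes, proportions):
        # The disclosure only requires the proportions to sum to <= 100%,
        # so any remainder is simply left unallocated.
        if sum(proportions.values()) > 1.0:
            raise ValueError("cache area proportions exceed 100% of the shared cache")
        return {prio: CacheArea(prio, int(total_bytes * share))
                for prio, share in proportions.items()}

    # Three areas as in the embodiment; the shares are illustrative.
    areas = partition_shared_cache(64 * 1024 * 1024,
                                   {"no_loss": 0.1, "high": 0.5, "low": 0.4})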

[0032] S2. Send the message queue to the corresponding cache area according to the sending priority of the message queue. The sending priority includes: no packet loss, high priority, and low priority.
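A matching sketch of step S2. How a packet declares its priority is not specified in the disclosure, so the internal-protocol flag and the DSCP-based rule below are purely illustrative:

    from collections import deque

    NO_LOSS, HIGH, LOW = "no_loss", "high", "low"

    def sending_priority(pkt):
        # Device-internal protocol packets map to the no-packet-loss class;
        # everything else is split high/low by an illustrative DSCP rule.
        if pkt.get("internal_protocol"):
            return NO_LOSS
        return HIGH if pkt.get("dscp", 0) >= 32 else LOW

    def dispatch(pkt, areas):
        # Send the packet to the cache area matching its sending priority.
        areas[sending_priority(pkt)].append(pkt)

    areas = {NO_LOSS: deque(), HIGH: deque(), LOW: deque()}
    dispatch({"dscp": 46, "payload": b"voice"}, areas)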

Example Embodiment

[0039] Example 2

[0040] Specifically, as shown in Figure 4, the solution process of this embodiment of the present disclosure is as follows:

[0041] First, the shared cache is divided into multiple cache areas, each corresponding to a different priority type; then, according to the sending priority of the message queue, the message queue is sent to the corresponding cache area. Based on queue priority and likely congestion status, the sending priority of a queue is divided into three types: no-packet-loss queue, high-priority queue, and low-priority queue.

[0042] No-packet-loss queue: configured as the queue aggregation group Group A in the following table. Queues of this kind are few and are reserved for the device's internal protocol packets. To avoid protocol oscillation, these queues must neither drop packets nor suffer excessive delay, so they generally do not participate in QoS scheduling. As long as a packet is enqueued, it ...
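The sentence above is truncated in the source. One reading consistent with the "no loss, low delay" requirement is that Group A is drained ahead of the QoS-scheduled queues, as in this sketch; the strict-priority dequeue rule is an assumption, not the disclosure's stated mechanism:

    from collections import deque

    group_a = deque()                               # no-packet-loss internal protocol queues
    qos_queues = {"high": deque(), "low": deque()}  # stand-ins for the scheduled queues

    def next_packet():
        # No-loss traffic bypasses QoS scheduling entirely.
        if group_a:
            return group_a.popleft()
        # Stand-in for the real QoS scheduler over the remaining queues.
        for prio in ("high", "low"):
            if qos_queues[prio]:
                return qos_queues[prio].popleft()
        return None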

Example Embodiment

[0060] Example 3

[0061] As shown in Figure 6, a caching device includes:

[0062] The partition module 401 is configured to divide the shared cache into multiple cache areas, each corresponding to a different priority type. The multiple cache areas are respectively used for buffering and forwarding the corresponding message queues: the first cache area buffers the no-packet-loss message queues, the second cache area buffers high-priority queues, and the third cache area buffers low-priority queues. Together, the first, second, and third cache areas occupy at most 100% of the shared cache.

[0063] The sending module 402 is configured to send the message queue to the corresponding cache area according to the sending priority of the message queue. The sending priority includes: no packet loss, high priority, and low priority.

[0064] If the ratio of the length of the message queue to the capacity of the...
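Paragraph [0064] is cut off above. A plausible completion, consistent with the WRED background, compares the queue-length-to-area-capacity ratio against a per-priority threshold when deciding whether to admit a packet; the threshold values and the drop action below are assumptions, not the disclosure's stated rule:

    # Illustrative per-priority admission thresholds (assumed values).
    THRESHOLDS = {"no_loss": 1.0, "high": 0.9, "low": 0.5}

    def admit(queue_len, area_capacity, priority):
        # Admit the packet only while its queue consumes less than the
        # threshold fraction of its cache area; otherwise drop early.
        return (queue_len / area_capacity) < THRESHOLDS[priority]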



Abstract

The invention discloses a caching method and device applied to the shared cache of network communication equipment. The method comprises the following steps: dividing the shared cache into a plurality of cache regions, each cache region corresponding to a different priority type; and sending the message queue to the corresponding cache region according to the sending priority of the message queue. According to the invention, shared cache resources are utilized more reasonably and efficiently, and queue priority scheduling of different service types can be guaranteed. Even if the total cache is not large, low-priority messages do not occupy so much cache that they crowd out the enqueue space of high-priority messages, QoS service quality is guaranteed as normal, and users enjoy a better internet experience.

Description

technical field

[0001] The present disclosure relates to the technical field of network communication, and in particular to a caching method and device.

Background technique

[0002] WRED (Weighted Random Early Detection) is a flow control mechanism that monitors the usage of network resources (such as queues or memory buffers), actively discards packets when congestion tends to intensify, and adjusts network traffic to relieve network overload. Its implementation depends on the size of the cache area, and no real cache area is unlimited in size: once the total cache size is exceeded, the WRED mechanism becomes abnormal, and the QoS (Quality of Service) queue scheduling that WRED supports also deviates.

[0003] A normally forwarding queue buffers almost no packets, and its traffic is scheduled according to specific QoS priorities. As shown in Figure 1, each queue sets a maximum cache usage...
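For background, the classic WRED behaviour referenced in [0002] can be sketched as follows: below a minimum threshold nothing is dropped, above a maximum threshold everything is dropped, and in between the drop probability rises linearly, with more lenient thresholds for higher-priority (more heavily weighted) traffic. The parameter values are illustrative, not from the disclosure:

    import random

    def wred_drop(avg_queue, min_th, max_th, max_p):
        # Decide whether to discard an arriving packet early.
        if avg_queue < min_th:
            return False                 # light load: never drop
        if avg_queue >= max_th:
            return True                  # severe congestion: always drop
        p = max_p * (avg_queue - min_th) / (max_th - min_th)  # linear ramp
        return random.random() < p

    # Higher priority gets more lenient thresholds -- the "weighted" part.
    profiles = {"high": (40, 60, 0.1), "low": (20, 40, 0.5)}
    drop = wred_drop(50, *profiles["low"])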


Application Information

IPC(8): H04L12/851, H04L12/863, H04L12/865, H04L47/6275
CPC: H04L47/24, H04L47/6215, H04L47/6275
Inventor: 寇远芳
Owner: NEW H3C BIG DATA TECH CO LTD