
Network cache design method based on consistent hash

A consistent-hash-based design method in the field of computer networks. It addresses problems such as heavy system load, drastic changes in cached data resources, and mismatches between cached data and servers, and improves IO performance.

Inactive Publication Date: 2015-10-28
SHANDONG CHAOYUE DATA CONTROL ELECTRONICS CO LTD


Problems solved by technology

[0003] The algorithm is very simple, but if the number of servers increases, the originally cached data is remapped, causing the server cluster to read large amounts of data from hard disk and producing a heavy system load. The traditional cache design scheme therefore cannot dynamically expand an existing cluster.

[0004] If a single server node in the cluster fails, removing that node likewise causes a mismatch between cached data and servers, leading to drastic changes in the cached data resources and degrading the overall performance of the system.



Examples


Embodiment 1

[0025] A network cache design method based on consistent hashing. First, the caches of the server cluster are centralized into a unified resource pool; all resources in the pool can be used by any scheduling request, and NUMA is used for cache management. Hash values are placed in a ring space of size 2^m, and each individual server is itself treated as an object, hashed, and placed on the hash ring. The data blocks Object A, B, C and D are then hashed to their corresponding key values, which are placed into the same hash space. Through these two operations, the server nodes and the data blocks are mapped into the same hash space, and each data block is assigned to the server node reached in the clockwise direction, as shown in Figure 2 and Figure 3.
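The mapping described in [0025] can be sketched as a minimal consistent-hash ring. This is an illustrative reconstruction, not the patent's implementation: the hash function (MD5), the choice of m = 32, and the class and server names are assumptions.

```python
# Minimal consistent-hash ring: servers and data blocks are hashed into
# the same 2**m ring space, and each block is assigned to the first
# server found clockwise from its position (illustrative sketch).
import bisect
import hashlib

M = 32                      # assumed ring exponent; the patent only says 2^m
RING_SIZE = 2 ** M

def ring_hash(name: str) -> int:
    """Hash a server or block name onto the ring (MD5 is an assumption)."""
    return int(hashlib.md5(name.encode()).hexdigest(), 16) % RING_SIZE

class ConsistentHashRing:
    def __init__(self, servers):
        # Each server is itself hashed and placed on the ring.
        self.ring = sorted((ring_hash(s), s) for s in servers)
        self.points = [p for p, _ in self.ring]

    def server_for(self, block: str) -> str:
        pos = ring_hash(block)
        # First server clockwise from the block's position, wrapping around.
        i = bisect.bisect_right(self.points, pos) % len(self.ring)
        return self.ring[i][1]

ring = ConsistentHashRing(["N1", "N2", "N3"])
for blk in ["Object A", "Object B", "Object C", "Object D"]:
    print(blk, "->", ring.server_for(blk))
```

The clockwise rule is what makes later node removal cheap: a block's owner changes only if the node between the block and its old owner disappears.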

[0026] A server cluster is a collection of multiple servers that provide services together but appear to the client as a single server. Clusters can operate in parallel, and clustered operat...

Embodiment 2

[0029] Building on Embodiment 1, this embodiment handles cluster operation when a single server goes down. Because a server failure must not affect the performance and services of the cluster, the failed server has to be removed: when a server in the cluster goes down, its cached content becomes invalid, and cache migration occurs. Similarly, when the number of servers in the cluster cannot provide sufficient service to the outside world, nodes must be added for expansion. The process is as follows:

[0030] If node N2 goes down and fails, as shown in Figure 4, the data blocks A and B that originally pointed to N2 must be re-cached on N3, so cache migration occurs; data blocks C and D remain cached on their original nodes. Not all data is migrated, which reduces the consumption of system resources.
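The failover behavior in [0030] can be checked with a small self-contained sketch (the hash function, ring size, and node names are assumptions): removing a node migrates only the blocks that previously hashed to it, and they move to the next node clockwise.

```python
# Demonstrates that when a node leaves a consistent-hash ring, only the
# blocks it owned are migrated; all other assignments are unchanged.
import bisect
import hashlib

def h(name: str) -> int:
    # Illustrative hash into a 2**32 ring (MD5 is an assumption).
    return int(hashlib.md5(name.encode()).hexdigest(), 16) % 2**32

def assign(blocks, servers):
    ring = sorted((h(s), s) for s in servers)
    points = [p for p, _ in ring]
    # Each block goes to the first server clockwise from its position.
    return {b: ring[bisect.bisect_right(points, h(b)) % len(ring)][1]
            for b in blocks}

blocks = [f"block-{i}" for i in range(1000)]
before = assign(blocks, ["N1", "N2", "N3"])
after = assign(blocks, ["N1", "N3"])          # N2 goes down
moved = [b for b in blocks if before[b] != after[b]]

# Every migrated block was previously owned by the failed node N2.
assert all(before[b] == "N2" for b in moved)
print(f"{len(moved)} of {len(blocks)} blocks migrated after N2 failed")
```

This is the property the embodiment relies on: the failure of one node never disturbs blocks cached on the surviving nodes.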

[0031] If t...

Embodiment 3

[0033] Building on Embodiment 1 or 2, the method of this embodiment derives virtual nodes from each physical node through an algorithm and maps the virtual nodes into the hash space. Adding virtual nodes avoids concentrating a large amount of cached data on a single server node and achieves a balanced data load.



Abstract

The invention discloses a network cache design method based on consistent hashing. The method comprises the following steps: the caches of a server cluster are centrally placed in a unified resource pool, all resources in the pool can be scheduled by any requester, and NUMA (Non-Uniform Memory Access) is adopted to manage the caches; hash values are placed in a ring space of size 2^m, and each individual server is treated as an object, hashed, and placed on the hash ring; each data block's corresponding key value is computed through the hash algorithm and placed into the same hash space. The technology is used for cache sharing and cache balancing in a server cluster: it accelerates large-scale data access, eliminates the influence of single-node cache invalidation, and effectively improves the IO (Input/Output) performance of the server cluster, providing both a unified storage space and a relatively uniform distribution of the caches across the server nodes.

Description

technical field

[0001] The invention relates to the technical field of computer networks, and in particular to a consistent-hash-based network cache design method used for cache sharing and cache balancing in server clusters. It mainly accelerates large-scale data access and eliminates the influence of single-node cache failure. The invention can effectively improve the IO performance of a server cluster, providing both a unified storage space and a relatively even distribution of the cache across server nodes.

Background technique

[0002] In a traditional server cluster, in order to achieve load balancing, hash modulo is performed over the N servers: a data request with key value X is allocated or routed to the server corresponding to hash(X) mod N.

[0003] This algorithm is very simple, but if the number of servers is increased, the original cached data will be remapped, causing the server cluster to read a large amount of hard disk data, resulting...
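The remapping problem of the hash(X) mod N scheme in [0002]-[0003] can be demonstrated with a short sketch (the hash choice and key names are illustrative): growing the cluster from 4 to 5 servers relocates the large majority of keys, which is exactly the disk-read storm the patent sets out to avoid.

```python
# Traditional modulo routing: key X goes to server hash(X) mod N.
# Changing N remaps most keys, invalidating most of the cache.
import hashlib

def server_for(key: str, n_servers: int) -> int:
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return h % n_servers

keys = [f"block-{i}" for i in range(10000)]
before = {k: server_for(k, 4) for k in keys}   # cluster of 4 servers
after = {k: server_for(k, 5) for k in keys}    # one server added
moved = sum(1 for k in keys if before[k] != after[k])
print(f"{moved / len(keys):.0%} of keys remapped when N grew from 4 to 5")
```

A key keeps its server only when hash(X) mod 4 equals hash(X) mod 5, which happens for roughly one key in five; consistent hashing replaces this global reshuffle with migration of only the keys adjacent to the changed node.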


Application Information

Patent Type & Authority: Application (China)
IPC(8): H04L29/08
CPC: H04L67/1001; H04L67/568
Inventors: 张凡凡, 吴登勇, 李保来
Owner: SHANDONG CHAOYUE DATA CONTROL ELECTRONICS CO LTD