High-cost-performance CDN system, and file pre-push and fragment buffer memory methods

A cost-effective CDN and file-caching technology, applied in the field of computer networks, addresses problems such as shrinking effective node storage space, premature eviction of unpopular files, and increased back-to-source bandwidth. Its effects are reduced back-to-source bandwidth and storage space, improved resource utilization, and lower usage cost.

Status: Inactive
Publication Date: 2015-09-23
BEIJING FASTWEB TECH

AI Technical Summary

Problems solved by technology

However, Squid does not serve large files well. The main reason is that when a client requests a large file, the connection is often interrupted before the download completes, so the file is never fully downloaded and cached.
Although there is an option to ignore client aborts and continue fetching from the source, this can waste back-to-source bandwidth.
At the same time, a request for a large file is often a fragment (range) request, in which case Squid goes directly back to the source instead of caching the file.
[0004] In addition, the cache servers in a node are scheduled by DNS, so the same URL may be hit evenly across the cache servers in the node. The same file is therefore stored repeatedly within the node, which effectively shrinks the storage space of the entire node, causes unpopular files to be evicted frequently, and increases the back-to-source bandwidth.

Method used



Examples


Embodiment 1

[0066] As shown in Figures 1-2, an embodiment of the present invention provides a cost-effective CDN system, including:

[0067] File pre-push background: records the client's file pre-push requests, and converts each file pre-push request and sends it to one or more cache nodes;

[0068] Cache node: includes a Linux virtual server and multiple cache servers, each cache server having a data connection to the Linux virtual server;

[0069] Linux virtual server: selects a cache server, and forwards the file pre-push request sent by the file pre-push background to the selected cache server;

[0070] Cache server: includes a consistency Hash module, a multi-layer storage system and a back-to-source proxy module; the consistent Hash algorithm determines the address and disk number of the cache server that stores the pre-pushed files; the multi-layer storage system schedules and stores files; the back-to-source proxy...
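To make the role of the consistency Hash module concrete, the following is a minimal Python sketch of how a pre-push URL might be mapped to a cache-server address and disk number with a consistent-hash ring. The hash function (MD5), the virtual-node count, and the key format are illustrative assumptions; the patent text does not specify them.

import bisect
import hashlib

class ConsistentHashRing:
    """Maps keys (URLs) onto (server_address, disk_number) targets."""

    def __init__(self, targets, vnodes=100):
        # targets: list of (server_address, disk_number) pairs in the node
        self._ring = []   # sorted hash positions
        self._owner = {}  # hash position -> target
        for target in targets:
            for i in range(vnodes):
                pos = self._hash(f"{target[0]}#{target[1]}#{i}")
                self._ring.append(pos)
                self._owner[pos] = target
        self._ring.sort()

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode("utf-8")).hexdigest(), 16)

    def locate(self, url):
        # Walk clockwise to the first virtual node at or after the URL's hash.
        pos = self._hash(url)
        idx = bisect.bisect(self._ring, pos) % len(self._ring)
        return self._owner[self._ring[idx]]

# Usage: decide which server and disk should store a pre-pushed file.
ring = ConsistentHashRing([("10.0.0.1", 0), ("10.0.0.1", 1), ("10.0.0.2", 0)])
server, disk = ring.locate("http://example.com/video/big_file.mp4")

Because the mapping depends only on the URL and the ring membership, every cache server in the node resolves the same URL to the same server and disk, which is what prevents the duplicate storage described in the background section.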

Embodiment 2

[0125] As shown in Figure 8, an embodiment of the present invention provides a method for pre-pushing files using the cost-effective CDN system provided in Embodiment 1, including the following steps:

[0126] C1, the file pre-push background sends the file pre-push request to the Linux virtual server;

[0127] C2, the Linux virtual server forwards the file pre-push request to a cache server in the node;

[0128] C3, according to the URL in the file pre-push request, the cache server uses the consistent Hash algorithm to determine the address and disk number of the local cache server that will store the pre-pushed file;

[0129] C4, according to the file pre-push request, the file pulling program is called to pull the pre-pushed file back from the upper-level source and load it into the local cache server.

[0130] As shown in Figure 9, in step C4 of this embodiment of the present invention, calling the file pulling program to pull the pre-pushed file back from the upper-level source and load it in...
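As a rough illustration of steps C3-C4 on the cache-server side, the sketch below assumes the ConsistentHashRing shown under Embodiment 1 and a hypothetical /cache/disk{N}/ directory layout; neither the storage path format nor the pull mechanism is defined in the patent text. It locates the owning server and disk for the requested URL and then pulls the file back from the upper-level source into local storage.

import os
import urllib.request

def handle_pre_push(url_path, ring, upstream_base):
    # C3: the consistent-hash ring decides which server/disk owns this URL.
    server, disk = ring.locate(url_path)

    # C4: pull the file back from the upper-level source and load it onto
    # the chosen disk of the local cache server.
    local_path = os.path.join(f"/cache/disk{disk}", url_path.split("/")[-1])
    with urllib.request.urlopen(upstream_base + url_path) as resp, \
            open(local_path, "wb") as out:
        while True:
            chunk = resp.read(64 * 1024)   # stream in 64 KiB chunks
            if not chunk:
                break
            out.write(chunk)
    return server, disk, local_path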

Embodiment 3

[0149] When a user accesses a file and the file does not exist in the local cache service, the request is handed over to the local back-to-source proxy server. If the file is small, the proxy can simply go back to the source. For large files, an embodiment of the present invention provides a method for processing the back-to-source request using the cost-effective CDN system described in Embodiment 1, including the following steps:

[0150] E1, determine whether the size information of the file has already been stored in the cache server; if so, go to E3, otherwise continue;

[0151] E2, send an HTTP HEAD request back to the source, obtain the response headers, capture the Content-Length field in the response headers, and determine the file size;

[0152] E3, according to the set fragment size, construct multiple fragments for the file;

[0153] E4. Determine whether the fragment exists in the local cache server. If it exists, take it out directl...
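A minimal sketch of the fragment back-to-source logic in steps E1-E4 is given below, assuming HTTP Range requests are used to fetch individual fragments and a 4 MiB fragment size; the fragment size and the cache interfaces here are illustrative assumptions rather than values taken from the patent.

import urllib.request

FRAGMENT_SIZE = 4 * 1024 * 1024  # assumed slice size, not specified in the text

def get_file_size(url, size_cache):
    # E1/E2: reuse a cached size if we have one, otherwise send an HTTP HEAD
    # request back to the source and read Content-Length from the headers.
    if url not in size_cache:
        req = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(req) as resp:
            size_cache[url] = int(resp.headers["Content-Length"])
    return size_cache[url]

def serve_fragments(url, size_cache, fragment_cache):
    size = get_file_size(url, size_cache)
    # E3: split the file into fixed-size fragments.
    for start in range(0, size, FRAGMENT_SIZE):
        end = min(start + FRAGMENT_SIZE, size) - 1
        key = (url, start)
        # E4: serve the fragment from the local cache if present; otherwise
        # go back to the source for just that byte range and cache it.
        if key not in fragment_cache:
            req = urllib.request.Request(
                url, headers={"Range": f"bytes={start}-{end}"})
            with urllib.request.urlopen(req) as resp:
                fragment_cache[key] = resp.read()
        yield fragment_cache[key]

Caching per fragment means that an aborted download still leaves the already-fetched fragments in the cache, so repeated partial requests for the same large file no longer trigger a full back-to-source transfer.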


Abstract

The invention discloses a high-cost-performance CDN system and methods for file pre-push and fragment caching, and relates to the field of IO-subsystem performance improvement in computer networks. Active pre-push of large files is realized through a file pre-push background and a cache server equipped with a consistency Hash module, while fragment caching of large files is realized by a back-to-source proxy module. This effectively reduces the back-to-source bandwidth and the storage space consumed by each file the system serves, improves resource utilization, and decreases the cost of use. In addition, a multi-layer storage system effectively improves the overall storage performance of the system and further enhances its service performance.

Description

Technical field

[0001] The invention relates to computer networks and to the technical field of improving IO-subsystem performance, and in particular to a cost-effective CDN system and methods for file pre-push and slice caching.

Background technique

[0002] CDN stands for Content Delivery Network. Its basic idea is to avoid, as far as possible, the bottlenecks and links on the Internet that may affect the speed and stability of data transmission, so that content is delivered faster and more reliably. By placing node servers throughout the network, a layer of intelligent virtual network is formed on top of the existing Internet. The CDN system can, in real time, redirect a user's request to the service node closest to that user according to comprehensive information such as network traffic, the connection and load status of each node, the distance to the user, and the response time. Its purpose is to enable users ...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC (8): H04L29/08
CPC: H04L67/06, H04L67/10, H04L67/5681, H04L67/568
Inventor: 吴泽林, 李灵韵, 张敬春
Owner: BEIJING FASTWEB TECH