
Ways to reduce server cache pressure

A server-cache technology, classified under instruments, special data processing applications, and electrical digital data processing. It addresses the problems of long query times, heavy time consumption, and client requests left waiting, with the effects of reducing cache pressure, improving robustness, and improving processing capacity.

Active Publication Date: 2018-07-13
GUANGZHOU HUADUO NETWORK TECH
  • Summary
  • Abstract
  • Description
  • Claims
  • Application Information

AI Technical Summary

Problems solved by technology

However, because the cache is distributed across different servers, a query for cached data must search multiple servers, which increases query time. In addition, if two or more servers cache the same data, they face data-synchronization problems; this consumes a lot of time and keeps the client's request waiting.



Examples


Embodiment 1

[0019] As shown in Figure 1, this embodiment is described from the server's processing flow and includes the following steps:

[0020] S11. After detecting that the user has logged in, read the user data recorded in the database, construct the user's data model package, and send it to the client so that the client can generate the data model;

[0021] After the server detects that the user has logged in, it reads the database according to the user's ID; the database records all of the user's relevant data, including data name, data type, and data value. From this data, the server builds the data model package that is later sent to the client.
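The patent does not specify how the data model package is represented; a minimal sketch of step S11, assuming each entry carries the data name, type, and value named above (all identifiers here are illustrative, not from the patent):

```python
# Hypothetical sketch of step S11: after login, the server reads the user's
# record and builds a data model package of (name, type, value) entries to
# send to the client.
from dataclasses import dataclass


@dataclass
class Field:
    name: str      # data name
    type: str      # data type
    value: object  # data value


def build_data_model_package(user_record: dict) -> list:
    """Turn a raw database row into the package sent to the client."""
    return [Field(name=k, type=type(v).__name__, value=v)
            for k, v in user_record.items()]


# Example: a fabricated user record standing in for a database row.
package = build_data_model_package({"user_id": 42, "score": 3.5, "nickname": "guo"})
```

In practice the package would be serialized (e.g. to JSON or a binary protocol) before being sent over the wire; the patent excerpt leaves the transport format open.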

[0022] S12. Construct a data maintenance algorithm model and send it to the client;

[0023] In this step, the server also builds a data maintenance algorithm model and sends it to the client. The data maintenance algorithm in this embodiment refers to the algorithm used when executing certain update rules f...
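The "data maintenance algorithm model" is not fully specified in this excerpt; a minimal sketch, assuming it is a registry of named update rules that the client can apply locally:

```python
# Illustrative sketch of step S12: the data maintenance algorithm model is
# treated here as a mapping from rule name to update function. The real
# representation is an assumption; the patent only says the server builds
# this model and sends it to the client.
ALGORITHM_MODEL = {
    # rule name -> function(old_value, argument) -> new_value
    "increment": lambda value, delta: value + delta,
    "overwrite": lambda value, new: new,
}


def apply_rule(model: dict, rule: str, value, arg):
    """Look up a maintenance rule by name and apply it to a value."""
    return model[rule](value, arg)
```

Sending rules once, up front, is what later lets the server transmit only a small "algorithm call command" instead of the updated data itself.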

Embodiment 2

[0039] As shown in Figure 2, this embodiment is described from the client's processing flow and includes the following steps:

[0040] S21. After the user logs in, receive the data model package constructed by the server after reading the user data recorded in the database, and create a data model of the user;

[0041] S22. Receive and store the data maintenance algorithm model sent by the server;

[0042] S23. When receiving the algorithm call command sent by the server, call the corresponding data maintenance algorithm in the data maintenance algorithm model according to the algorithm call command, and update the data model;

[0043] After the client detects that the user has logged in, it receives the data model package that the server created from the user data in the database, and creates the data model in allocated memory according to the package; the client also receives and stores the data maintenance algorithm model sent by the server...
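Steps S21–S23 can be sketched as a small client object (names and the command format are hypothetical): the client keeps a local data model plus the algorithm model received from the server, and updates the model itself when an algorithm call command arrives, so the server need not cache or resend the data.

```python
# Minimal client-side sketch of S21-S23 under assumed message formats.
class Client:
    def __init__(self):
        self.data_model = {}   # local copy of the user's data
        self.algorithms = {}   # data maintenance algorithm model

    def receive_data_model(self, package: dict):          # S21
        self.data_model = dict(package)

    def receive_algorithm_model(self, algorithms: dict):  # S22
        self.algorithms = dict(algorithms)

    def on_algorithm_call(self, command: dict):           # S23
        # The command names the rule, the target field, and its arguments;
        # the update happens entirely in client memory.
        rule = self.algorithms[command["rule"]]
        field = command["field"]
        self.data_model[field] = rule(self.data_model[field], *command["args"])


client = Client()
client.receive_data_model({"score": 100})
client.receive_algorithm_model({"add": lambda v, d: v + d})
client.on_algorithm_call({"rule": "add", "field": "score", "args": [7]})
```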

Embodiment 3

[0064] As shown in Figure 3, this embodiment is described using the interaction between the server and the client as an example, and includes the following steps:

[0065] S31, user login;

[0066] S32. The server reads the user data recorded in the database, constructs a data model package of the user, and sends it to the client;

[0067] S33. Construct a data maintenance algorithm model and send it to the client;

[0068] S34. Receive the data model package sent by the server, and create the data model of the user;

[0069] S35. Receive and store the data maintenance algorithm model sent by the server;

[0070] S36. When the data of the user is updated, send an algorithm calling command to the client;

[0071] S37. When receiving the algorithm call command sent by the server, call the corresponding data maintenance algorithm in the data maintenance algorithm model according to the algorithm call command, and update the data model.
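The interaction in S31–S37 can be sketched end to end under the same assumptions as above: after the one-time setup, each update costs the server only a small call command rather than a cached copy of the data.

```python
# End-to-end sketch of S36-S37 (illustrative names): on an update, the server
# sends an algorithm call command, not the data itself; the client applies
# the named rule to its own model.
def server_on_update(send, field, delta):
    # S36: the server describes the update as a rule invocation.
    send({"rule": "increment", "field": field, "args": [delta]})


# Client state established during S34-S35 (setup messages omitted here).
client_model = {"coins": 50}
client_algorithms = {"increment": lambda v, d: v + d}


def client_receive(command):
    # S37: look up the rule from the stored algorithm model and apply it.
    rule = client_algorithms[command["rule"]]
    field = command["field"]
    client_model[field] = rule(client_model[field], *command["args"])


server_on_update(client_receive, "coins", 10)
```

Because the command is small and the data lives in client memory, the server's cache holds less per-user state, which is the "reduced cache pressure" the title refers to.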

[0072] The method of the ...



Abstract

The invention provides a method for relieving the caching pressure of a server. The method includes the following steps: after it is detected that a user has logged in, the user data recorded in a database are read, and a data model package for the user is constructed and sent to a client so that the client can generate the data model; a data maintenance algorithm model is constructed and sent to the client; when the user's data are updated, an algorithm call command is sent to the client so that the client can call the corresponding data maintenance algorithm in the data maintenance algorithm model and update its data model. The method effectively relieves the maintenance pressure that arises when the server caches user data.

Description

Technical Field

[0001] The invention relates to the technical field of server cache processing, and in particular to a method for alleviating server cache pressure.

Background Technique

[0002] As the Internet develops, its coverage keeps growing and has penetrated every aspect of daily life. The client base served by servers increases constantly, challenging server processing capacity. In particular, when handling a large number of user requests, to speed up processing the server no longer queries and writes the database directly on every request; instead it first caches, maintaining a copy of the data in memory for its own access, and then synchronizes and writes the data to the database in batches. Reducing database queries and writes effectively improves server processing capacity. However, if the number of users is as large as one million, the caching techno...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06F17/30, H04L29/08
CPC: G06F16/23
Inventor: 郭志
Owner: GUANGZHOU HUADUO NETWORK TECH