
Automatically managed terminal file partition cache system and its working method

A technology for an automatically managed caching system, applied in transmission systems, electrical digital data processing, and special data processing applications. It addresses problems such as excessive memory consumption and low loading performance, achieving improved loading performance, low hardware performance requirements, and relief of performance bottlenecks.

Active Publication Date: 2018-08-14
STATE GRID CORP OF CHINA +2

AI Technical Summary

Problems solved by technology

[0002] Large-scale data collection services use multiple service processors, scaled to the volume of reported data, and each processor loads terminal file data into its cache. Because the number of access terminals is large, having every server cache all terminal files results in low loading performance and excessive memory consumption.




Embodiment Construction

[0019] The technical solution of the present invention will be further described in detail below in conjunction with the accompanying drawings.

[0020] As shown in Figure 2, the present invention includes: a communication front-end processor responsible for communication scheduling, original message storage, and communication traffic statistics; a service processor responsible for protocol encapsulation and parsing, data collection, and data storage; a unit code manager for managing the service processors and the access-terminal relationships; and a master station. The service processor is connected with the unit code manager and the communication front-end processor; the master station is connected with the communication front-end processor, the service processor, and the unit code manager. The unit code manager is equipped with a database and automatically assigns access terminals to the service processors according to the lo...
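The assignment and routing roles described above — a unit code manager distributing access terminals across service processors, and a front-end processor forwarding each terminal's data to the processor that owns it — can be sketched as follows. This is a minimal illustration; the class and method names are hypothetical, and the least-loaded-first strategy is only one plausible reading of the load-balancing step, which the excerpt leaves unspecified.

```python
class UnitCodeManager:
    """Illustrative sketch: assigns each access terminal to the currently
    least-loaded service processor, so each processor later caches only
    the terminal files assigned to it."""

    def __init__(self, processor_ids):
        # Terminal count per service processor. "Number of assigned
        # terminals" stands in for the patent's (unspecified) load metric.
        self.loads = {pid: 0 for pid in processor_ids}
        self.assignment = {}  # terminal_id -> processor_id

    def assign(self, terminal_id):
        # Pick the processor with the fewest assigned terminals.
        pid = min(self.loads, key=self.loads.get)
        self.assignment[terminal_id] = pid
        self.loads[pid] += 1
        return pid

    def route(self, terminal_id):
        # Used by the front-end processor: look up which service
        # processor owns this terminal's file and forward data there.
        return self.assignment[terminal_id]
```

With two processors, four terminals end up split two and two, and `route` returns the same processor that `assign` chose for each terminal.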



Abstract

The invention discloses a self-managing terminal file partitioned cache system and relates to an automatic distribution management system and its working method. In large-scale data collection services, each server caches all terminal files, so loading performance is poor and memory consumption is excessive. The self-managing terminal file partitioned cache system comprises a communication front-end processor, a service processor, a unit code manager and a master station. The service processor is connected with the unit code manager and the communication front-end processor; the master station is connected with the communication front-end processor, the service processor and the unit code manager. The unit code manager is provided with a database and automatically distributes access terminals to the service processors according to a load balancing strategy; each service processor loads into its cache only the terminal files distributed to it; and the front-end processor sends data to the corresponding service processor according to the affiliation of the terminals. Because a service processor caches only the terminal files distributed to it, rather than all terminal files, loading performance is improved and memory consumption is reduced.
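The partitioned-cache behaviour in the abstract — each service processor loading only its assigned terminal files instead of the full terminal population — can be sketched on the processor side as follows. All names here are hypothetical; the sketch only assumes a shared store of terminal files and an assignment list supplied by the unit code manager.

```python
class ServiceProcessor:
    """Illustrative sketch: a service processor that caches only the
    terminal files assigned to it, not every terminal file."""

    def __init__(self, pid, file_store):
        self.pid = pid
        self.file_store = file_store  # terminal_id -> terminal file data
        self.cache = {}

    def load_assigned(self, assigned_terminals):
        # Populate the cache with assigned terminals only; memory use
        # scales with the partition size, not the total terminal count.
        self.cache = {t: self.file_store[t] for t in assigned_terminals}

    def handle(self, terminal_id, data):
        # Reported data arrives only for terminals this processor owns,
        # because the front-end processor routes by terminal affiliation.
        profile = self.cache[terminal_id]
        return (profile, data)
```

A processor assigned two of four terminals caches exactly two files, which is the memory saving the abstract claims over caching all terminal files on every server.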

Description

technical field

[0001] The invention relates to an automatic distribution management system and its working method.

Background technique

[0002] Large-scale data collection services use multiple service processors, scaled to the volume of reported data, and each processor loads terminal file data into its cache. Because the number of access terminals is large, having every server cache all terminal files results in low loading performance and excessive memory consumption.

Contents of the invention

[0003] The technical problem to be solved and the technical task proposed by the present invention are to perfect and improve the existing technical solutions, providing an automatically managed terminal file partition cache system that improves loading performance and reduces memory consumption. To this end, the present invention adopts the following technical solutions.

[0004] The automatically managed terminal file partition cache system is chara...

Claims


Application Information

Patent Type & Authority: Patents (China)
IPC(8): H04L29/08, G06F17/30
Inventor: 蒋鸿城李熊王中原裘炜浩王志强洪建光裴旭斌龚小刚吴凯峰崔蔚陈清泰
Owner: STATE GRID CORP OF CHINA