[0015] Example:
[0016] The application method for resource process state management based on a state-driven engine in this embodiment uses a management table for resource process state management and comprises two steps: handling cache penetration and handling cache concurrency. The handling of cache penetration mainly comprises: the cache is checked first for the main state of the resource; if the main state is present, the cached content is returned directly; if it is not, the database is queried, the transition-state scenario is then queried, and the result is returned; if the queried primary-node resource does not exist, no feedback is returned to the caller and the database is not called again to query the transition-scene state.
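The cache-penetration check described above can be illustrated with a minimal sketch; the identifiers (query_main_state, the in-memory cache and database dicts, the _MISSING sentinel) are illustrative assumptions rather than part of the described method:

```python
# Minimal sketch of the cache-penetration check (illustrative identifiers; the
# in-memory dicts stand in for the real cache and database).

_MISSING = object()   # sentinel cached for resources that do not exist

cache = {}            # resource id -> cached result (or _MISSING)
database = {}         # resource id -> {"main": ..., "transitions": ...}

def query_main_state(resource_id):
    # 1. Check the cache first; on a hit, return the cached content directly.
    if resource_id in cache:
        cached = cache[resource_id]
        return None if cached is _MISSING else cached

    # 2. On a miss, query the database for the main state of the resource.
    row = database.get(resource_id)
    if row is None:
        # Primary-node resource does not exist: remember the absence so that
        # repeated calls do not keep hitting the database, and do not go on to
        # query the transition-scene state.
        cache[resource_id] = _MISSING
        return None

    # 3. The resource exists: query its transition-state scenario and return the result.
    result = {"main": row["main"], "transitions": row.get("transitions", set())}
    cache[resource_id] = result
    return result
```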
[0017] The application method for resource process state management described in this embodiment then handles cache concurrency. If the cache is invalid and multiple threads concurrently call the database to query the main state of the resource and place it in the cache, multiple entries for the same resource with the same or different main states appear in the cache, and this situation recurs. In addition, the subordinate transition states of each main state form a set.
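The cache-concurrency situation described above can be illustrated with a minimal sketch that assumes a per-resource lock is used so that only one thread refills an invalid cache entry at a time; the lock-based mitigation and all identifiers are assumptions for illustration:

```python
# Illustrative sketch of avoiding the concurrency problem above by serializing
# cache refills per resource with a lock (the lock is an assumption, not part
# of the described method).

import threading
from collections import defaultdict

cache = {}
database = {"order-1": "CREATED"}          # stand-in for the real database
_locks = defaultdict(threading.Lock)       # one lock per resource id

def get_main_state(resource_id):
    state = cache.get(resource_id)
    if state is not None:
        return state
    with _locks[resource_id]:
        # Re-check inside the lock: another thread may already have refilled
        # the entry, so only one database query is issued per invalid entry.
        state = cache.get(resource_id)
        if state is None:
            state = database.get(resource_id)
            cache[resource_id] = state
    return state
```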
[0018] The management table includes four linked lists. The first two are straightforward: one is the LRU linked list, which holds the most recently used pages; the other is the LFU linked list, which holds the most frequently used pages. The other two linked lists store information about recently eliminated pages and are called ghost linked lists: the LRU ghost linked list stores information about pages recently eliminated from the most recently used (LRU) linked list, and the LFU ghost linked list stores information about pages recently eliminated from the most frequently used (LFU) linked list.
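For illustration only, the management table can be sketched with each of the four linked lists modeled as an ordered map; the class and field names are assumptions:

```python
# Illustrative sketch of the management table: each of the four linked lists is
# modeled as an ordered map from page id to page data (class and field names
# are assumptions).

from collections import OrderedDict

class ManagementTable:
    def __init__(self, capacity):
        self.capacity = capacity
        self.lru = OrderedDict()        # most recently used pages (used once)
        self.lfu = OrderedDict()        # most frequently used pages (used at least twice)
        self.lru_ghost = OrderedDict()  # ids of pages recently eliminated from lru
        self.lfu_ghost = OrderedDict()  # ids of pages recently eliminated from lfu
        self.target_lru = capacity // 2 # adaptive share of the cache given to lru
```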
[0019] Through the application method of resource process state management described in this embodiment, resources accessed once enter the LRU linked list, and resources accessed at least twice are moved to the LFU linked list. If a page in the LFU list is accessed again, it is placed at the head of the LFU list (the most frequently used position), so that pages that are truly frequently accessed stay in the cache, while pages that are not frequently accessed move toward the tail of the list and are eventually eliminated. As time goes on, the two linked lists are continuously filled and the cache fills accordingly. If the cache is full and a page that is not cached is read, a page must be eliminated from the cache to make room for the newly read page; the page just eliminated from the cache is no longer referenced by any non-ghost linked list in the cache. If the LRU linked list is full, the least recently used page in the LRU linked list is eliminated, and the eliminated page is put into the LRU ghost linked list.
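The access and elimination behavior described above can be sketched as follows, assuming the lists are kept as ordered maps as in the previous sketch; the function names are illustrative:

```python
# Illustrative sketch of the access and elimination behavior, with the lists
# kept as ordered maps (OrderedDict); function names are assumptions.

from collections import OrderedDict

def on_lfu_hit(lfu, page_id):
    # A page in the LFU list that is accessed again moves to the
    # "most frequently used" end, so it stays in the cache.
    lfu.move_to_end(page_id)

def evict_from_lru(lru, lru_ghost, lru_capacity):
    # When the LRU list is full, eliminate its least recently used page and
    # record the eliminated page id in the LRU ghost list.
    if len(lru) >= lru_capacity:
        page_id, _page = lru.popitem(last=False)
        lru_ghost[page_id] = None   # the ghost list keeps only page information, not data

# Example: with lru_capacity = 2, reading page "c" forces "a" into the ghost list.
lru, ghost = OrderedDict(a=1, b=2), OrderedDict()
evict_from_lru(lru, ghost, lru_capacity=2)
lru["c"] = 3
```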
[0020] If a hit occurs in the LFU ghost linked list, the length of the LRU linked list is reduced and a free slot is added to the LFU linked list. In this way the ARC algorithm adapts to the workload: if the workload tends to access recently accessed files, more hits occur in the LRU ghost list and the LRU cache space increases; if the workload tends to access frequently accessed files, more hits occur in the LFU ghost list and the LFU cache space increases.
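The adaptive adjustment can be sketched as follows, assuming a single target value that splits the total cache capacity between the LRU and LFU lists; the one-slot adjustment amount is an illustrative assumption:

```python
# Illustrative sketch of the adaptive step, assuming a single target value that
# splits the total capacity between the LRU and LFU lists; the one-slot
# adjustment amount is an assumption.

def adapt_on_ghost_hit(target_lru, capacity, hit_in_lru_ghost):
    if hit_in_lru_ghost:
        # Recency-oriented workload: grow the LRU share of the cache.
        return min(capacity, target_lru + 1)
    # Frequency-oriented workload: shrink the LRU share, which frees space
    # for the LFU list.
    return max(0, target_lru - 1)
```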
[0021] In the application method of resource process state management described in this embodiment, the subordinate transition states of each main state form a set, and this set sits between the LRU and the LFU. To improve the effect, the set consists of two LRUs: the first LRU, called L1, contains entries that have recently been used only once, while the second LRU, called L2, contains entries that have recently been used twice; that is, new objects are placed in L1 and commonly used objects are placed in L2. Matching is performed through the cached object model, and the unique transition-state matching result object is output to make a procedure call.
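The two-LRU set of subordinate transition states can be sketched as follows; apart from the names L1 and L2, the bookkeeping details and identifiers are illustrative assumptions:

```python
# Illustrative sketch of the transition-state set held as two LRUs, L1 and L2;
# the bookkeeping details and method names are assumptions.

from collections import OrderedDict

class TransitionStateSet:
    def __init__(self, capacity):
        self.capacity = capacity
        self.l1 = OrderedDict()   # entries used only once recently (new objects)
        self.l2 = OrderedDict()   # entries used twice recently (commonly used objects)

    def record(self, state):
        if state in self.l2:
            self.l2.move_to_end(state)      # already common: refresh its position
        elif state in self.l1:
            del self.l1[state]              # second recent use: promote to L2
            self.l2[state] = True
        else:
            self.l1[state] = True           # first use: enters L1
        for lru in (self.l1, self.l2):      # keep both LRUs bounded
            if len(lru) > self.capacity:
                lru.popitem(last=False)

    def match(self, model_state):
        # Match against the cached object model and output the unique
        # transition-state result, if it is currently held in either LRU.
        if model_state in self.l2 or model_state in self.l1:
            return model_state
        return None
```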