78 results about "Improve cache utilization" patented technology

DRAM (dynamic random access memory)-NVM (non-volatile memory) hierarchical heterogeneous memory access method and system adopting software and hardware collaborative management

The invention provides a DRAM (dynamic random access memory)-NVM (non-volatile memory) hierarchical heterogeneous memory system with software-hardware collaborative management. In the system, the NVM serves as large-capacity main memory while the DRAM serves as a cache to the NVM. Effective use of certain reserved bits in the TLB (translation lookaside buffer) and the page table structure eliminates the hardware overhead of a conventionally hardware-managed hierarchical heterogeneous memory architecture, moves cache management of the heterogeneous memory system to the software level, and reduces memory access delay after a last-level cache miss. Because many applications in big data environments have poor data locality, and a traditional demand-based prefetching strategy for the DRAM cache can aggravate cache pollution, the system adopts a utility-based data prefetching mechanism: whether data in the NVM is cached into the DRAM is decided according to the current memory pressure and the memory access characteristics of the application, which improves the utilization of the DRAM cache and of the bandwidth from the NVM main memory to the DRAM cache.
Owner:HUAZHONG UNIV OF SCI & TECH
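As a rough illustration of the utility-based fetching idea in the abstract above, the sketch below admits an NVM page into the DRAM cache only when its observed access count clears a threshold that rises with DRAM memory pressure. The class, the hotness counter, and the threshold formula are illustrative assumptions, not details taken from the patent.

```python
# Minimal sketch of a utility-based DRAM-cache admission policy:
# an NVM page is promoted into DRAM only when its access frequency
# outweighs the current memory pressure. All names and the threshold
# rule are hypothetical, for illustration only.

class UtilityBasedAdmission:
    def __init__(self, dram_capacity_pages: int):
        self.dram_capacity = dram_capacity_pages
        self.cached = set()      # NVM page numbers currently cached in DRAM
        self.hotness = {}        # per-page access counter

    def memory_pressure(self) -> float:
        """Fraction of the DRAM cache in use (0.0 = empty, 1.0 = full)."""
        return len(self.cached) / self.dram_capacity

    def on_access(self, page: int) -> str:
        if page in self.cached:
            return "dram_hit"
        self.hotness[page] = self.hotness.get(page, 0) + 1
        # The admission threshold rises with memory pressure: under high
        # pressure only demonstrably hot pages are fetched into DRAM,
        # limiting cache pollution for low-locality workloads.
        threshold = 1 + int(4 * self.memory_pressure())
        if self.hotness[page] >= threshold and len(self.cached) < self.dram_capacity:
            self.cached.add(page)        # fetch the NVM page into the DRAM cache
            return "promoted_to_dram"
        return "served_from_nvm"         # bypass the cache, read NVM directly

cache = UtilityBasedAdmission(dram_capacity_pages=2)
print([cache.on_access(p) for p in (7, 7, 7, 9)])
# -> ['promoted_to_dram', 'dram_hit', 'dram_hit', 'served_from_nvm']
```

Note how page 9 is served from NVM without polluting the cache: with the cache half full, a single access no longer clears the threshold.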

SD-RAN-based whole network collaborative content caching management system and method

Active CN105491156A
The invention discloses an SD-RAN (software-defined radio access network)-based whole-network cooperative caching management system and method. A content-popularity mapping table and a caching/replacement candidate list are generated through probing and content-popularity statistics; a caching decision is made from the candidate list and issued to the caching nodes, which cache and update content accordingly; a real-time global caching mapping table is maintained by monitoring the caching decisions; and each caching node responds to content requests from users in its cell and delivers the corresponding content. The method accounts for the delay limits on content acquisition and for the variation of content popularity over time and place, and it exploits the global network view available at the control layer of a software-defined network to widen the range over which content popularity is observed. By optimizing content placement through whole-network cooperative caching, repeated retrieval of content from a remote content server is avoided, network overhead is reduced, and user experience is improved.
Owner:HUAZHONG UNIV OF SCI & TECH
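The control-layer flow described above (per-cell popularity statistics, a caching decision pushed to the nodes, and a global caching map used to answer misses cooperatively) might be sketched as follows. The data structures, the per-node placement rule, and all names are assumptions for illustration, not the patented method.

```python
# Hypothetical controller-side sketch of whole-network cooperative caching.
from collections import Counter

class CacheController:
    def __init__(self):
        self.popularity = {}   # node -> Counter: per-cell popularity statistics
        self.node_cache = {}   # real-time global caching map: node -> cached set

    def observe(self, node, content):
        """Content-popularity statistics collected from per-cell requests."""
        self.popularity.setdefault(node, Counter())[content] += 1

    def push_decisions(self, per_node_capacity):
        """Issue a caching decision: each node caches its locally hottest items."""
        for node, counts in self.popularity.items():
            self.node_cache[node] = {c for c, _ in counts.most_common(per_node_capacity)}

    def serve(self, node, content):
        if content in self.node_cache.get(node, set()):
            return "local_hit"
        for other, cached in self.node_cache.items():
            if content in cached:            # cooperative hit at another cell
                return f"fetched_from_{other}"
        return "fetched_from_origin_server"  # the case cooperation tries to avoid

ctrl = CacheController()
for node, clip in [("cell_a", "clip_1"), ("cell_a", "clip_1"), ("cell_b", "clip_2")]:
    ctrl.observe(node, clip)
ctrl.push_decisions(per_node_capacity=1)
print(ctrl.serve("cell_b", "clip_1"))   # -> fetched_from_cell_a
```

The global caching map is what lets a miss at one cell be answered by another cell instead of the remote content server.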

DRAM/NVM hierarchical heterogeneous memory access method and system with software-hardware cooperative management

Active US20170277640A1
The present invention provides a DRAM/NVM hierarchical heterogeneous memory system with software-hardware cooperative management schemes. In the system, NVM is used as large-capacity main memory, and DRAM is used as a cache to the NVM. Some reserved bits in the data structures of the TLB and the last-level page table are employed to eliminate the hardware costs of the conventional hardware-managed hierarchical memory architecture, and cache management in such a heterogeneous memory system is pushed to the software level. Moreover, the invention reduces memory access latency in the case of last-level cache misses. Considering that many applications have relatively poor data locality in big data environments, where the conventional demand-based data fetching policy for a DRAM cache can aggravate cache pollution, a utility-based data fetching mechanism is adopted in the DRAM/NVM hierarchical memory system: it determines whether data in the NVM should be cached in the DRAM according to current DRAM utilization and the application's memory access patterns. This improves the efficiency of the DRAM cache and of the bandwidth between the NVM main memory and the DRAM cache.
Owner:HUAZHONG UNIV OF SCI & TECH
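The reserved-bit technique in this abstract can be sketched with plain bit manipulation. The specific bit positions below are an assumption (bits 52-58 of an x86-64 page-table entry are typically ignored by the MMU and available to software); the patent does not fix particular bits, and the helper names are hypothetical.

```python
# Sketch of software-managed DRAM-cache state kept in reserved PTE bits.
# Bit positions and all helper names are illustrative assumptions.
CACHED_IN_DRAM = 1 << 52   # page currently has a DRAM cache copy
DIRTY_IN_DRAM  = 1 << 53   # DRAM copy modified since it was fetched

def mark_cached(pte: int) -> int:
    """Software marks the page as cached after fetching it into DRAM."""
    return pte | CACHED_IN_DRAM

def is_cached(pte: int) -> bool:
    """Checked by software to steer an access to the DRAM copy or to NVM."""
    return bool(pte & CACHED_IN_DRAM)

def mark_dirty(pte: int) -> int:
    """Set when the DRAM copy is modified, so eviction writes back to NVM."""
    return pte | DIRTY_IN_DRAM

def evict(pte: int) -> tuple[int, bool]:
    """Drop the DRAM copy; returns the new PTE and whether write-back is needed."""
    needs_writeback = bool(pte & DIRTY_IN_DRAM)
    return pte & ~(CACHED_IN_DRAM | DIRTY_IN_DRAM), needs_writeback

pte = 0x0000_0000_DEAD_B000 | 0x1          # hypothetical PTE: frame + present bit
pte = mark_dirty(mark_cached(pte))
print(is_cached(pte), evict(pte)[1])       # -> True True
```

Because the state lives in bits the hardware already ignores, no extra tag storage or cache controller is needed, which is the hardware cost the abstract says is eliminated.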

Edge collaborative cache placement method based on the drosophila (fruit fly) optimization algorithm

The invention discloses an edge collaborative cache placement method based on the drosophila (fruit fly) optimization algorithm. The method comprises the following steps: (1) obtaining the popular video set and the user demand vector of an area from the historical request information of users in the area; (2) establishing, from the popular video set and the user demand vector, an optimization problem whose objective is to maximize the total reduction in video transmission delay in the area, and solving it with the drosophila optimization algorithm to generate a cache placement decision; (3) allocating a video caching task to each cache node according to the cache placement decision; and (4) when a user request arrives at a cache node that does not hold the requested content, downloading the content from the neighboring cache node that holds it with the smallest delay, or, if no cache node in the area holds it, downloading it from the remote server. The method improves the cache hit rate, reduces the average video transmission delay, and improves the quality of user experience.
Owner:SOUTHEAST UNIV
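Step (4) of the method, the request path, can be sketched directly; the node names, the delay table, and the function signature below are illustrative assumptions, and the placement produced by steps (1)-(3) is taken as given.

```python
# Sketch of the step (4) request path: local hit, else the minimum-delay
# neighbor holding the video, else the remote server. Inputs are assumed.

def handle_request(node, video, caches, delays, remote_delay):
    """caches: node -> set of cached videos; delays: (a, b) -> link delay."""
    if video in caches[node]:
        return node, 0.0                       # local cache hit
    holders = [n for n in caches if n != node and video in caches[n]]
    if holders:
        best = min(holders, key=lambda n: delays[(node, n)])
        return best, delays[(node, best)]      # nearest cooperating neighbor
    return "remote_server", remote_delay       # no copy anywhere in the area

caches = {"A": {"v1"}, "B": {"v2"}, "C": set()}
delays = {("C", "A"): 5.0, ("C", "B"): 2.0}
print(handle_request("C", "v1", caches, delays, remote_delay=20.0))  # ('A', 5.0)
```

The objective solved in step (2) is exactly the sum of such per-request delay savings over the demand vector, which is why a better placement raises the hit rate and lowers the average delay.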

Apache Solr read-write separation method and apparatus

The invention relates to an Apache Solr read-write separation method and apparatus, a computer device, and a storage medium. The method comprises the steps of: receiving a data writing request from a persistence program and writing the data into a writing cluster and a snapshot cluster; receiving a search request from an Apache Solr client and searching for the data in a reading cluster and the snapshot cluster; receiving a segment-merging instruction sent by the persistence program and executing the segment-merging operation on the data in the writing cluster; receiving a synchronization instruction sent by the persistence program and incrementally loading index files from the data directory of the writing cluster into the off-heap memory of the reading cluster; and receiving a data-cleaning instruction sent by the persistence program and cleaning the synchronized expired data out of the snapshot cluster. With this method, read and write operations are separated, so contention for system resources is avoided and normal operation of the server is guaranteed; and because the segment-merging operation is completed before data synchronization, a system crash caused by segment merging during synchronization is avoided.
Owner:HUNAN ANTVISION SOFTWARE
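The sequence of operations in this abstract can be sketched as a small orchestration. The SolrCluster stand-in and its methods (add, search, merge_segments, sync_index_from, purge_synced) are hypothetical placeholders, not real Apache Solr client APIs.

```python
# Orchestration sketch of the read/write separation flow, with stub
# clusters standing in for real Solr clusters. All methods are assumed.

class SolrCluster:
    def __init__(self, name): self.name = name
    def add(self, docs): print(f"{self.name}: index {len(docs)} docs")
    def search(self, q): print(f"{self.name}: query '{q}'")
    def merge_segments(self): print(f"{self.name}: merge segments")
    def sync_index_from(self, src):
        print(f"{self.name}: load index files from {src.name} into off-heap memory")
    def purge_synced(self): print(f"{self.name}: clean expired synced docs")

writing, reading, snapshot = (SolrCluster(n) for n in ("writing", "reading", "snapshot"))

def write(docs):
    # Writes go only to the writing and snapshot clusters, so queries
    # never contend with indexing on the same nodes.
    writing.add(docs); snapshot.add(docs)

def search(query):
    # Reads hit the reading cluster plus the snapshot cluster, which
    # still holds documents written since the last synchronization.
    reading.search(query); snapshot.search(query)

def synchronize():
    # Merge segments BEFORE syncing, so the reading cluster never loads
    # an index that is being rewritten mid-merge.
    writing.merge_segments()
    reading.sync_index_from(writing)
    snapshot.purge_synced()

write([{"id": 1}]); search("id:1"); synchronize()
```

The snapshot cluster is what keeps freshly written data searchable during the window between a write and the next incremental synchronization.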

Efficient novel memory index structure processing method

The invention discloses an efficient processing method for a novel memory index structure. The method comprises the steps of: before skip-list processing, estimating the query distribution and the data distribution from statistical information; selecting the sentinel nodes to insert into the skip-list structure, where the optimal sentinel configuration is obtained by minimizing the average operation cost of the skip list after insertion; inserting the sentinel nodes into the bottom-level skip-list structure and then, once all sentinel nodes are in place, building an upper-level CSB+ tree structure bottom-up with a bulk-load method so that sentinel nodes can be located quickly; and, for each piece of data to be queried or inserted, finding the nearest sentinel node through the upper-level CSB+ tree structure and operating on the skip list starting from that sentinel node. While retaining the advantages of the traditional skip-list structure, such as simple implementation, good concurrency, and suitability for range queries, the method improves the cache utilization of the whole operation process and thus markedly improves memory index performance.
Owner:ZHEJIANG UNIV
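The two-level lookup described above can be approximated in a few lines: a sorted array stands in for the upper-level CSB+ tree, a plain sorted list stands in for the bottom-level skip list, and sentinels are taken at fixed intervals rather than by the patented cost-minimizing placement. All of these simplifications are assumptions for brevity.

```python
# Sketch of the two-level idea: an upper index locates the nearest
# sentinel, and the search walks the bottom level from that sentinel
# instead of from the head. The real structure uses a skip list below
# and a bulk-loaded CSB+ tree above; this is a simplified stand-in.
import bisect

class SentinelIndexedList:
    def __init__(self, keys, sentinel_every=4):
        self.keys = sorted(keys)                  # bottom-level list
        # The patent derives sentinel positions by minimizing expected
        # operation cost; here we simply take every k-th key.
        self.sentinels = self.keys[::sentinel_every]

    def search(self, key):
        # Upper-index lookup: the rightmost sentinel <= key.
        i = bisect.bisect_right(self.sentinels, key) - 1
        start = self.keys.index(self.sentinels[i]) if i >= 0 else 0
        # Walk from the sentinel, not the head; the short, contiguous
        # scan is what improves cache utilization.
        for j in range(start, len(self.keys)):
            if self.keys[j] == key:
                return j
            if self.keys[j] > key:
                break
        return -1

idx = SentinelIndexedList(range(0, 100, 5))
print(idx.search(40), idx.search(41))   # -> 8 -1
```

Starting each operation from a nearby sentinel bounds the traversal to a short segment of the structure, which is the cache-utilization gain the abstract claims over searching the skip list from its head.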
