673 patent results for "Cache hit rate" technology

The hit rate is the number of cache hits divided by the total number of memory requests over a given time interval, expressed as a percentage: hit rate = (cache hits / memory requests) × 100%. The miss rate has the same form: miss rate = (cache misses / memory requests) × 100%, so the two rates always sum to 100%.
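
As a worked example of the formulas above (the counter values are illustrative, not taken from any particular cache):

    #include <stdio.h>

    int main(void) {
        unsigned long hits = 950, misses = 50;   /* illustrative counters */
        unsigned long requests = hits + misses;
        double hit_rate  = 100.0 * (double)hits   / (double)requests;
        double miss_rate = 100.0 * (double)misses / (double)requests;
        printf("hit rate: %.1f%%  miss rate: %.1f%%\n", hit_rate, miss_rate);
        return 0;   /* prints: hit rate: 95.0%  miss rate: 5.0% */
    }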

Method and apparatus for prefetching recursive data structures

Computer systems are typically designed with multiple levels of memory hierarchy. Prefetching has been employed to overcome the latency of fetching data or instructions from memory. Prefetching works well for data structures with regular memory access patterns, but less so for data structures such as trees, hash tables, and other structures in which the datum that will be used is not known a priori. The present invention significantly increases the cache hit rates of many important data structure traversals, and thereby the potential throughput of the computer system and application in which it is employed. The invention is applicable to those data structure accesses in which the traversal path is dynamically determined. The invention does this by aggregating traversal requests and then pipelining the traversal of aggregated requests on the data structure. Once enough traversal requests have been accumulated so that most of the memory latency can be hidden by prefetching the accumulated requests, the data structure is traversed by performing software pipelining on some or all of the accumulated requests. As requests are completed and retired from the set of requests that are being traversed, additional accumulated requests are added to that set. This process is repeated until either an upper threshold of processed requests or a lower threshold of residual accumulated requests has been reached. At that point, the traversal results may be processed.
Owner:DIGITAL CACHE LLC +1
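
A minimal sketch of the aggregate-then-pipeline idea above, applied to binary-search-tree lookups. The node layout, the batch limit of 64 requests, and the GCC/Clang __builtin_prefetch intrinsic are assumptions for illustration, not the patented design:

    struct node { int key; struct node *left, *right; };

    /* Accumulate up to 64 lookups, then advance them round-robin. Each pass
     * prefetches every request's next node, so one request's cache miss is
     * hidden behind useful work on the others. */
    void batched_lookup(struct node *root, const int *keys,
                        struct node **out, int n) {
        struct node *cur[64];                  /* current frontier per request */
        int live = 0;
        for (int i = 0; i < n; i++) {
            cur[i] = root;
            out[i] = NULL;
            if (root) live++;
        }
        while (live > 0) {
            for (int i = 0; i < n; i++) {
                struct node *c = cur[i], *next;
                if (!c) continue;
                if (keys[i] == c->key) { out[i] = c; next = NULL; }
                else next = (keys[i] < c->key) ? c->left : c->right;
                if (next) __builtin_prefetch(next);  /* overlap miss latency */
                else live--;                         /* request retired      */
                cur[i] = next;
            }
        }
    }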

Method for prefetching recursive data structure traversals

Computer systems are typically designed with multiple levels of memory hierarchy. Prefetching has been employed to overcome the latency of fetching data or instructions from memory. Prefetching works well for data structures with regular memory access patterns, but less so for data structures such as trees, hash tables, and other structures in which the datum that will be used is not known a priori. In modern transaction processing systems, database servers, operating systems, and other commercial and engineering applications, information is frequently organized in trees, graphs, and linked lists. Lack of spatial locality results in a high probability that a miss will be incurred at each cache in the memory hierarchy. Each cache miss causes the processor to stall while the referenced value is fetched from lower levels of the memory hierarchy. Because this is likely to be the case for a significant fraction of the nodes traversed in the data structure, processor utilization suffers. The inability to compute the address of the next node to be referenced makes prefetching difficult in such applications. The invention allows compilers and/or programmers to restructure data structures and traversals so that pointers are dereferenced in a pipelined manner, thereby making it possible to schedule prefetch operations in a consistent fashion. The present invention significantly increases the cache hit rates of many important data structure traversals, and thereby the potential throughput of the computer system and application in which it is employed. For data structure traversals in which the traversal path may be predetermined, a transformation is performed on the data structure that permits references to nodes that will be traversed in the future to be computed sufficiently far in advance to prefetch the data into the cache.
Owner:DIGITAL CACHE LLC
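
One established way to make future references computable in advance, in the spirit of the abstract above, is a jump-pointer transformation over a linked list. The sketch below assumes that flavor; the node layout, the prefetch distance of 8, and __builtin_prefetch are illustrative assumptions rather than the patent's exact scheme:

    #define PREFETCH_DIST 8
    struct lnode { int val; struct lnode *next, *jump; };

    /* One-time restructuring pass: give every node a pointer to the node
     * PREFETCH_DIST hops downstream, so its address is known early. */
    void install_jump_pointers(struct lnode *head) {
        struct lnode *lead = head;
        for (int i = 0; i < PREFETCH_DIST && lead; i++) lead = lead->next;
        for (struct lnode *p = head; p; p = p->next) {
            p->jump = lead;                  /* null near the tail */
            if (lead) lead = lead->next;
        }
    }

    long sum_list(const struct lnode *head) {
        long s = 0;
        for (const struct lnode *p = head; p; p = p->next) {
            if (p->jump) __builtin_prefetch(p->jump);  /* fetch 8 nodes early */
            s += p->val;
        }
        return s;
    }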

Mixed storage system and method for supporting solid-state disk cache dynamic distribution

The invention provides a mixed storage system and method supporting dynamic allocation of a solid-state disk cache. The mixed storage system is built from a solid-state disk and a magnetic disk, with the solid-state disk serving as a cache for the magnetic disk. The load characteristics of applications and the cache hit ratio of the solid-state disk are monitored in real time, performance models of the applications are built, and the cache space of the solid-state disk is dynamically allocated according to the applications' performance requirements and changes in their load characteristics. The solid-state disk cache management method can allocate the cache space sensibly according to application performance requirements, providing an application-level cache partition service. Because each application's solid-state disk cache space is further divided into a read cache section and a write cache section, dirty data blocks, and the page-copying and garbage-collection costs they cause, are reduced. Meanwhile, the idle cache space of the solid-state disk is allocated to applications according to their cache use efficiency, improving both the cache hit ratio of the solid-state disk and the overall performance of the mixed storage system.
Owner:HUAZHONG UNIV OF SCI & TECH
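
A minimal sketch of the allocation step, assuming each application's "cache use efficiency" is approximated as recent hits per assigned SSD block; the structures and the one-block-at-a-time policy are illustrative, not the patented performance models:

    struct app_cache {
        unsigned long hits;      /* hits observed in the current window */
        unsigned long blocks;    /* SSD cache blocks currently assigned */
    };

    static double efficiency(const struct app_cache *a) {
        return a->blocks ? (double)a->hits / (double)a->blocks : 0.0;
    }

    /* Grant `spare` idle SSD blocks, one at a time, to the application with
     * the best marginal efficiency; a real system would migrate data lazily. */
    void rebalance(struct app_cache *apps, int n, unsigned long spare) {
        while (spare--) {
            int best = 0;
            for (int i = 1; i < n; i++)
                if (efficiency(&apps[i]) > efficiency(&apps[best])) best = i;
            apps[best].blocks++;
        }
    }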

Cooperative caching method based on popularity prediction in named data networking

CN106131182A (Active)
The invention discloses a cooperative caching method based on popularity prediction in named data networking (NDN). When content is stored in NDN, caching it at many nodes along the delivery path, or only at a few important nodes, leads to high data redundancy and low utilization of the in-network cache space. The method instead uses a 'partial cooperative caching' mode: first, the future popularity of content is predicted; then, an optimal proportion of each node's cache space is set aside as a local cache for high-popularity content, while the remaining part stores content of relatively low popularity through cooperation with neighboring nodes. The optimal space-division proportion is computed by considering the hop counts of interest packets in the network and the request hits seen at the server side. Compared with conventional caching strategies, the method increases the utilization of in-network cache space, reduces caching redundancy, improves the cache hit rate of the nodes, and improves the performance of the network as a whole.
Owner:CHONGQING UNIV OF POSTS & TELECOMM
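
A minimal sketch of the popularity-driven placement decision, assuming exponential smoothing as the predictor; the smoothing weight, the threshold test, and the structure names are illustrative, and the patent's optimal partition ratio (derived from interest-packet hop counts and server-side hits) is taken as given:

    #define ALPHA 0.3   /* weight of the newest observation */

    struct content { double popularity; };   /* smoothed request-rate estimate */

    void observe(struct content *c, double requests_this_period) {
        c->popularity = ALPHA * requests_this_period
                      + (1.0 - ALPHA) * c->popularity;
    }

    /* 1 = keep in the local high-popularity partition,
       0 = hand to the cooperative partition shared with neighbor nodes. */
    int place_locally(const struct content *c, double hot_threshold) {
        return c->popularity >= hot_threshold;
    }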

Memory caching method oriented to range querying on Hadoop

CN103942289A (Inactive)
The invention discloses a memory caching method oriented to range querying on Hadoop. The method comprises the following steps: (1) an index is built on the query attributes of the Hadoop mass data and stored in HBase; (2) an in-memory cache of the HBase index data is established at fragment granularity: frequently accessed index data are selected and kept in memory, data fragments are initially split by fixed-length equal division, and the mass of data fragments is organized in a skiplist; (3) query hits are recorded per fragment, and the heat of each data fragment is measured by exponential smoothing; (4) the memory cache is updated accordingly. By combining the skiplist with fragment collections, the structure supports dynamic adjustment of fragment boundaries, letting the fragments adapt to query demands; this improves the query cache hit rate for hot data fragments and reduces the overhead of disk accesses during queries, thereby greatly improving range-query performance.
Owner:GUANGXI NORMAL UNIV
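
A minimal sketch of steps (3) and (4), assuming a fixed smoothing weight; the fragment structure is illustrative, and the skiplist that orders fragments by key range is taken as given:

    #define SMOOTH 0.5   /* exponential-smoothing weight */

    struct fragment {
        unsigned long hits_this_period;  /* query hits recorded in step (3) */
        double heat;                     /* smoothed access frequency       */
    };

    /* Step (3): fold the latest window of hits into the fragment's heat. */
    void update_heat(struct fragment *f) {
        f->heat = SMOOTH * (double)f->hits_this_period
                + (1.0 - SMOOTH) * f->heat;
        f->hits_this_period = 0;         /* open the next window */
    }

    /* Step (4): choose the coldest fragment as the replacement victim. */
    int coldest(const struct fragment *frags, int n) {
        int victim = 0;
        for (int i = 1; i < n; i++)
            if (frags[i].heat < frags[victim].heat) victim = i;
        return victim;
    }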

Method for balancing load of network GIS heterogeneous cluster server

The invention discloses a method for balancing the load of a network GIS (Geographic Information System) heterogeneous cluster server. The method builds on two observations: access to GIS data follows the Zipf distribution, and the servers in the cluster have heterogeneous processing capacities. Cluster cache allocation adapts to dense user access, so that while the cache hit rate is improved, the access load of hot-spot data is also balanced. The minimum processing cost the cluster system needs for a data-request service is derived from the overall performance of the heterogeneous cluster service system, and user access response time is optimized while the load of the heterogeneous cluster server is balanced; requests are distributed according to their content, which prevents the access load of hot-spot data from becoming overly concentrated. The method closely matches the highly clustered access behavior of large-scale users in a network GIS, coordinates and balances the relation between load distribution and access locality, guarantees service efficiency and load optimization, and effectively improves the service performance of a real network GIS and the utilization efficiency of the heterogeneous cluster service system.
Owner:WUHAN UNIV
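
A minimal sketch of a dispatch rule in the spirit of the method above: a request for a tile is sent to the replica whose estimated cost (queued load over processing capacity) is lowest, so hot Zipf-head tiles, which are replicated on more servers, spread their load across the heterogeneous cluster. The structures and the cost formula are illustrative assumptions:

    struct server {
        double capacity;   /* relative processing power */
        double load;       /* outstanding queued work   */
    };

    /* `replicas` lists the servers caching this tile; hot tiles list more. */
    int pick_server(const struct server *srv, const int *replicas, int nreplicas) {
        int best = replicas[0];
        for (int i = 1; i < nreplicas; i++) {
            int s = replicas[i];
            if (srv[s].load / srv[s].capacity
                    < srv[best].load / srv[best].capacity)
                best = s;
        }
        return best;
    }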