428 results for "Hit ratio" patented technology

The hit ratio is the fraction of accesses that are hits; the miss ratio is the fraction of accesses that are misses. The hit (or miss) latency, also known as access time, is the time it takes to fetch the data on a hit (or miss). If the access is a hit, this time is short because the data is already in the cache.
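
For concreteness, here is a minimal sketch in Python of how these quantities combine into the average memory access time (AMAT); the latency figures are illustrative assumptions, not values from any patent below.

```python
def average_access_time(hits: int, misses: int,
                        hit_latency_ns: float = 1.0,
                        miss_penalty_ns: float = 100.0) -> float:
    """AMAT = hit_ratio * hit_latency + miss_ratio * (hit_latency + miss_penalty)."""
    total = hits + misses
    hit_ratio = hits / total
    miss_ratio = misses / total
    return (hit_ratio * hit_latency_ns
            + miss_ratio * (hit_latency_ns + miss_penalty_ns))

# 950 hits out of 1000 accesses: hit ratio 0.95, AMAT = 0.95*1 + 0.05*101 = 6.0 ns
print(average_access_time(hits=950, misses=50))
```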

Method and apparatus for prefetching recursive data structures

Computer systems are typically designed with multiple levels of memory hierarchy. Prefetching has been employed to overcome the latency of fetching data or instructions from memory. Prefetching works well for data structures with regular memory access patterns, but less well for structures such as trees, hash tables, and others in which the datum that will be used next is not known a priori. The present invention significantly increases the cache hit rates of many important data structure traversals, and thereby the potential throughput of the computer system and application in which it is employed. The invention applies to data structure accesses in which the traversal path is determined dynamically, and works by aggregating traversal requests and then pipelining the traversal of the aggregated requests over the data structure. Once enough traversal requests have accumulated that most of the memory latency can be hidden by prefetching them, the data structure is traversed by software-pipelining some or all of the accumulated requests. As requests complete and are retired from the set being traversed, additional accumulated requests are added to that set. This process repeats until either an upper threshold of processed requests or a lower threshold of residual accumulated requests is reached, at which point the traversal results may be processed.
Owner:DIGITAL CACHE LLC +1
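
The control structure the abstract describes can be sketched as follows (a hedged illustration, not the patented implementation): lookups are accumulated into a batch, and while one lookup's current node is processed, the node a fixed distance ahead in the batch is prefetched, overlapping its memory latency. Here prefetch() stands in for a hardware prefetch hint such as C's __builtin_prefetch, and both thresholds are assumed.

```python
from collections import deque

BATCH_THRESHOLD = 64   # assumed: callers accumulate this many lookups before traversing
PREFETCH_DISTANCE = 4  # assumed: how far ahead in the batch to prefetch

def prefetch(node):
    """Stand-in for a hardware prefetch hint (no-op in Python)."""

def traverse_batch(root, keys, step):
    """Advance every lookup one pointer per round, prefetching ahead."""
    cursors = deque((key, root) for key in keys)
    results = []
    while cursors:
        key, node = cursors.popleft()
        nxt = step(node, key)            # dereference one pointer of this lookup
        if nxt is None:
            results.append((key, node))  # lookup finished: retire it
        else:
            cursors.append((key, nxt))   # re-queue it for its next level
        if len(cursors) >= PREFETCH_DISTANCE:
            # touch the node a later lookup will need, hiding its fetch latency
            prefetch(cursors[PREFETCH_DISTANCE - 1][1])
    return results
```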

Method for prefetching recursive data structure traversals

Computer systems are typically designed with multiple levels of memory hierarchy. Prefetching has been employed to overcome the latency of fetching data or instructions from memory. Prefetching works well for data structures with regular memory access patterns, but less well for structures such as trees, hash tables, and others in which the datum that will be used next is not known a priori. In modern transaction processing systems, database servers, operating systems, and other commercial and engineering applications, information is frequently organized in trees, graphs, and linked lists. Lack of spatial locality results in a high probability that a miss will be incurred at each cache in the memory hierarchy. Each cache miss causes the processor to stall while the referenced value is fetched from lower levels of the memory hierarchy. Because this is likely to be the case for a significant fraction of the nodes traversed in the data structure, processor utilization suffers. The inability to compute the address of the next node to be referenced makes prefetching difficult in such applications. The invention allows compilers and/or programmers to restructure data structures and traversals so that pointers are dereferenced in a pipelined manner, making it possible to schedule prefetch operations consistently. The present invention significantly increases the cache hit rates of many important data structure traversals, and thereby the potential throughput of the computer system and application in which it is employed. For data structure traversals in which the traversal path can be predetermined, a transformation is performed on the data structure that permits references to nodes that will be traversed in the future to be computed sufficiently far in advance to prefetch the data into cache.
Owner:DIGITAL CACHE LLC
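
When the traversal path can be predetermined, as in the abstract's final sentence, the pipelining reduces to keeping the prefetcher a fixed number of nodes ahead of the visitor. A minimal sketch, with prefetch() again a stand-in hint and the distance an assumption:

```python
PREFETCH_DISTANCE = 8  # assumed: nodes of lead time needed to hide memory latency

def prefetch(node):
    """Stand-in for a hardware prefetch hint (no-op in Python)."""

def traverse_pipelined(path, visit):
    """Visit nodes along a precomputed path, prefetching a fixed distance ahead."""
    for i, node in enumerate(path):
        if i + PREFETCH_DISTANCE < len(path):
            prefetch(path[i + PREFETCH_DISTANCE])  # fetch overlaps the next visits
        visit(node)
```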

Mixed storage system and method for supporting solid-state disk cache dynamic distribution

The invention provides a mixed storage system and method supporting dynamic allocation of a solid-state disk cache. The mixed storage system is constructed from a solid-state disk and a magnetic disk, with the solid-state disk serving as a cache for the magnetic disk. The load characteristics of applications and the cache hit ratio of the solid-state disk are monitored in real time, performance models of the applications are built, and the solid-state disk cache space is dynamically allocated according to the applications' performance requirements and changes in their load characteristics. The method can allocate the solid-state disk cache space according to the performance requirements of the applications, providing an application-level cache partition service. Because each application's solid-state disk cache space is further divided into a read cache section and a write cache section, dirty data blocks, and the page-copy and garbage-collection overhead they cause, are reduced. Meanwhile, idle solid-state disk cache space is allocated to applications according to their cache use efficiency, improving both the solid-state disk cache hit ratio and the overall performance of the mixed storage system.
Owner:HUAZHONG UNIV OF SCI & TECH
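
One simple way to realize the "cache use efficiency" rule in the abstract is to re-divide the cache periodically in proportion to each application's measured hit ratio. The sketch below assumes that reading; the application names and numbers are illustrative, not from the patent.

```python
def reallocate(total_cache_pages, hit_ratios):
    """Divide SSD cache pages among apps in proportion to measured hit ratio."""
    weight = sum(hit_ratios.values()) or 1.0
    return {app: int(total_cache_pages * ratio / weight)
            for app, ratio in hit_ratios.items()}

# monitored hit ratios per application (illustrative)
shares = reallocate(1_000_000, {"oltp": 0.80, "analytics": 0.35, "backup": 0.05})
print(shares)  # the cold "backup" workload receives the smallest share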

Cache management method for solid-state disc

The invention discloses a cache management method for a solid-state disc. The method comprises the following implementation steps: a page cache, a replacement block module, a new-page linked list, a physical block linked list, and a physical page state list are established; an input/output (IO) request from the host is received and served through the page cache; when a write request misses in the page cache and the page cache has no free space, the solid-state disc's block replacement process is executed, in which "invalid" page space in the page cache is preferentially released; when the number of "invalid" pages in the page cache is zero, the candidate block with the largest invalid-page ratio in the latter half of the physical block linked list is selected as the replacement block, and the replacement block cache is used to execute the replacement write process. The method makes effective use of the limited cache space and increases the cache hit rate, and it ensures that a block written to the flash medium contains as many dirty data pages and as few valid data pages as possible, reducing erase operations as well as the page-copy operations and subsequent garbage collection caused by dirty data pages. The method is simple to implement.
Owner:HUNAN GREAT WALL GALAXY TECH CO LTD
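
The victim-selection rule in the abstract (among the colder latter half of the physical block list, take the block with the largest invalid-page ratio, so each replacement frees the most space per erase) can be sketched as below; the block representation is an assumption for illustration.

```python
def pick_victim(blocks):
    """blocks: ordered hot -> cold; each entry is (valid_pages, invalid_pages)."""
    colder_half = blocks[len(blocks) // 2:]          # only the latter half is eligible
    return max(colder_half, key=lambda b: b[1] / (b[0] + b[1]))

blocks = [(60, 4), (50, 14), (30, 34), (8, 56)]      # ordered hot -> cold
print(pick_victim(blocks))  # (8, 56): 87.5% invalid, cheapest block to reclaim
```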

Data page caching method for file system of solid-state hard disc

The invention discloses a data page caching method for a file system of a solid-state hard disc, which comprises the following implementation steps: (1) establishing a buffer linked list used for caching data pages in a high-speed cache; (2) caching data pages read from the solid-state hard disc in the buffer linked list for access, and classifying the data pages in the buffer linked list in real time into cold clean pages, hot clean pages, cold dirty pages, and hot dirty pages according to their access and write-access states; (3) when no free space exists in the buffer linked list, first searching the buffer linked list for a page to be replaced in the priority order cold clean, hot clean, cold dirty, hot dirty, and replacing that page with a new data page read from the solid-state hard disc. The invention fully exploits the characteristics of the solid-state hard disc, effectively relieves the external-storage performance bottleneck, and improves the storage processing performance of the system; moreover, the method offers good I/O (input/output) performance, low replacement cost for cached pages, low overhead, and a high hit rate.
Owner:NAT UNIV OF DEFENSE TECH
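
The four-class priority in step (3) reflects eviction cost: clean pages need no flash write-back and cold pages are unlikely to be re-referenced, so cold clean pages are evicted first and hot dirty pages last. A minimal sketch under assumed page attributes:

```python
EVICT_ORDER = [("cold", "clean"), ("hot", "clean"),
               ("cold", "dirty"), ("hot", "dirty")]

def choose_victim(pages):
    """pages: dicts with 'temp' in {cold, hot} and 'state' in {clean, dirty}."""
    for temp, state in EVICT_ORDER:
        for page in pages:               # scanned in list (recency) order
            if page["temp"] == temp and page["state"] == state:
                return page
    return None                          # empty buffer: nothing to evict

pages = [{"id": 1, "temp": "hot", "state": "dirty"},
         {"id": 2, "temp": "cold", "state": "clean"}]
print(choose_victim(pages))  # page 2: cold and clean, cheapest to drop
```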

Address mapping method for flash translation layer of solid state drive

The invention discloses an address mapping method for the flash translation layer of a solid state drive. The method comprises the following steps: (1) establishing a cached mapping table, a cached split table, a cached translation table, and a global translation directory in SRAM (static random access memory) in advance; (2) receiving an IO (input/output) request, proceeding to step (3) if it is a write request and to step (4) otherwise; (3) searching the tables in the SRAM in priority order for a hit on the current IO request, completing the write operation according to the hit mapping information, and caching the mapping information according to the hit type and a threshold value; (4) searching the tables in the SRAM in priority order for a hit on the current IO request, and completing the read operation according to the hit mapping information in the SRAM. The method improves the random write performance of the solid state drive, prolongs its service life, makes the flash translation layer efficient, achieves a high hit ratio for address mapping information in the SRAM, and incurs few additional read-write operations between the SRAM and the flash.
Owner:NAT UNIV OF DEFENSE TECH
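
The lookup order in steps (3) and (4), consulting the SRAM-resident tables first and falling back to the global translation directory only on a miss, resembles demand-based FTLs such as DFTL. The sketch below assumes that reading; the table shapes, the 512-entry translation page, and the flash.read_translation_page method are illustrative, not the patent's interfaces.

```python
ENTRIES_PER_TRANSLATION_PAGE = 512  # assumed translation-page capacity

def translate(lpn, cmt, cst, ctt, gtd, flash):
    """Map a logical page number (lpn) to a physical page number (ppn)."""
    for table in (cmt, cst, ctt):        # SRAM tables, checked in priority order
        if lpn in table:
            return table[lpn]            # SRAM hit: no flash access needed
    # SRAM miss: the global translation directory locates the translation
    # page on flash that holds this mapping
    tp = gtd[lpn // ENTRIES_PER_TRANSLATION_PAGE]
    ppn = flash.read_translation_page(tp)[lpn % ENTRIES_PER_TRANSLATION_PAGE]
    cmt[lpn] = ppn                       # cache the mapping for future hits
    return ppn
```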