84 results about "Cache pollution" patented technology

Cache pollution describes situations where an executing computer program loads data into CPU cache unnecessarily, thus causing other useful data to be evicted from the cache into lower levels of the memory hierarchy, degrading performance. For example, in a multi-core processor, one core may replace the blocks fetched by other cores into shared cache, or prefetched blocks may replace demand-fetched blocks from the cache.
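
As a rough illustration of the effect described above (not taken from any of the patents below), the C sketch assumes a small, repeatedly reused array `hot` and a large buffer `stream` that is read only once; the one-pass scan fills the cache with lines that will never be reused and evicts the hot working set.

```c
#include <stddef.h>
#include <stdint.h>

#define HOT_WORDS    (32 * 1024)        /* small working set, fits in cache */
#define STREAM_WORDS (8 * 1024 * 1024)  /* large one-pass buffer            */

static uint64_t hot[HOT_WORDS];
static uint64_t stream[STREAM_WORDS];

/* Repeatedly reused data: after warm-up it is served from the cache. */
static uint64_t sum_hot(void) {
    uint64_t s = 0;
    for (size_t i = 0; i < HOT_WORDS; i++)
        s += hot[i];
    return s;
}

/* One-pass scan of a large buffer: every line it touches is loaded into the
 * cache even though it will never be reused, evicting lines of hot[].
 * This is the cache pollution described above. */
static uint64_t scan_stream(void) {
    uint64_t s = 0;
    for (size_t i = 0; i < STREAM_WORDS; i++)
        s += stream[i];
    return s;
}

int main(void) {
    uint64_t s = 0;
    s += sum_hot();      /* hot[] is now cache-resident            */
    s += scan_stream();  /* pollutes the cache with dead data      */
    s += sum_hot();      /* misses again: hot[] has been evicted   */
    return (int)(s & 0xff);
}
```

One common mitigation on x86 is to hint non-temporal access for the streaming data, for example with `_mm_prefetch(p, _MM_HINT_NTA)`, so the one-pass buffer bypasses most of the cache hierarchy.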

DRAM (dynamic random access memory)-NVM (non-volatile memory) hierarchical heterogeneous memory access method and system adopting software and hardware collaborative management

The invention provides a DRAM (dynamic random access memory)-NVM (non-volatile memory) hierarchical heterogeneous memory system adopting software and hardware collaborative management. In the system, the NVM is used as large-capacity main memory while the DRAM serves as a cache for the NVM. By making effective use of certain reserved bits in the TLB (translation lookaside buffer) and the page table structure, the hardware overhead of a conventionally hardware-managed hierarchical heterogeneous memory architecture is eliminated, cache management of the heterogeneous memory system is moved to the software level, and the memory access latency after a last-level cache miss is reduced. Because many applications in big data environments have poor data locality, a traditional demand-based data prefetching strategy for the DRAM cache can aggravate cache pollution; the DRAM-NVM hierarchical heterogeneous memory system therefore adopts a utility-based data prefetching mechanism that decides, according to the current memory pressure and the memory access characteristics of the application, whether data in the NVM should be cached in the DRAM, thereby improving the utilization of the DRAM cache and of the bandwidth from the NVM main memory to the DRAM cache.
Owner:HUAZHONG UNIV OF SCI & TECH
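
The abstract above does not spell out the utility-based fetching policy, so the following C sketch is only an illustration of the general idea under assumed names and thresholds (`page_stats`, `recent_accesses`, the pressure cut-offs): whether an NVM page is promoted into the DRAM cache depends on current DRAM pressure and on how often the page has recently been accessed.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical per-page bookkeeping; field names are assumptions, not the
 * patent's data structures. */
struct page_stats {
    uint32_t recent_accesses;   /* accesses observed in the current epoch */
};

/* Decide whether an NVM page is worth caching in DRAM.  The idea from the
 * abstract: under high DRAM pressure, only frequently reused ("high utility")
 * pages are promoted, so single-use pages do not pollute the DRAM cache. */
static bool should_cache_in_dram(const struct page_stats *ps,
                                 double dram_pressure /* 0.0 .. 1.0 */) {
    /* Illustrative thresholds only. */
    if (dram_pressure < 0.50)
        return true;                       /* DRAM mostly free: cache eagerly    */
    if (dram_pressure < 0.90)
        return ps->recent_accesses >= 2;   /* moderate pressure: need some reuse */
    return ps->recent_accesses >= 8;       /* high pressure: only hot pages      */
}
```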

DRAM/NVM hierarchical heterogeneous memory access method and system with software-hardware cooperative management

Active | US20170277640A1 | Eliminates hardware | Reducing memory access delay | Memory architecture accessing/allocation | Memory systems | Term memory | Page table
The present invention provides a DRAM/NVM hierarchical heterogeneous memory system with software-hardware cooperative management schemes. In the system, NVM is used as large-capacity main memory, and DRAM is used as a cache to the NVM. Some reserved bits in the data structures of the TLB and the last-level page table are employed effectively to eliminate the hardware costs of the conventional hardware-managed hierarchical memory architecture, and cache management in such a heterogeneous memory system is pushed to the software level. Moreover, the invention is able to reduce memory access latency in the case of last-level cache misses. Considering that many applications have relatively poor data locality in big data environments, the conventional demand-based data fetching policy for the DRAM cache can aggravate cache pollution. In the present invention, a utility-based data fetching mechanism is adopted in the DRAM/NVM hierarchical memory system; it determines whether data in the NVM should be cached in the DRAM according to current DRAM memory utilization and application memory access patterns, improving the efficiency of the DRAM cache and the bandwidth usage between the NVM main memory and the DRAM cache.
Owner:HUAZHONG UNIV OF SCI & TECH
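
The abstract mentions repurposing reserved bits in the TLB and last-level page table but gives no bit layout, so the sketch below is purely an assumption-level illustration: a software-visible page-table entry that records whether the NVM page currently has a DRAM copy and, if so, which DRAM frame holds it, letting the OS redirect accesses after a last-level cache miss.

```c
#include <stdint.h>

/* Hypothetical PTE layout; the exact reserved bits used by the patent are not
 * given in the abstract, so all field names and widths below are assumptions. */
struct soft_pte {
    uint64_t present    : 1;
    uint64_t writable   : 1;
    uint64_t accessed   : 1;
    uint64_t dirty      : 1;
    uint64_t in_dram    : 1;   /* assumed: set when the NVM page has a DRAM copy */
    uint64_t dram_frame : 28;  /* assumed: frame number of the DRAM copy         */
    uint64_t nvm_frame  : 31;  /* frame number of the home page in NVM           */
};

/* On a last-level cache miss, software (not dedicated hardware) consults the
 * PTE and directs the access to the DRAM copy when one exists, else to NVM. */
static uint64_t translate(const struct soft_pte *pte, uint64_t page_offset) {
    uint64_t frame = pte->in_dram ? pte->dram_frame : pte->nvm_frame;
    return (frame << 12) | (page_offset & 0xfff);
}
```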

Multi-parameter cache pollution attack detection method in content centric networking

The invention relates to content centric networking, and in particular to a multi-parameter cache pollution attack detection method in content centric networking. The method comprises the following steps: performing attack detection periodically and cyclically, and calculating a sampled value of the attack impact degree in the current cycle; defining the change of the sampled attack impact degree relative to the previous cycle as the variation of the attack impact degree; setting a threshold according to the standard deviation of the variation of the attack impact degree over a number of cycles; and determining that an attack has occurred if the variation of the attack impact degree exceeds the threshold. Compared with traditional cache pollution attack detection methods, this method can cope with more complex network environments and can detect cache pollution attack behaviour promptly and accurately, thereby helping to improve the overall security of the content centric network.
Owner:CHONGQING UNIV OF POSTS & TELECOMM
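
A compact C sketch of the detection loop described in the abstract above: each cycle, sample the attack impact, compute its change from the previous cycle, derive a threshold from the standard deviation of recent changes, and flag an attack when the change exceeds the threshold. How the impact metric is computed from the multiple parameters is not specified in the abstract, so it is left as an input; the history length and the multiplier `k` are assumptions.

```c
#include <math.h>
#include <stdbool.h>
#include <stddef.h>

#define HISTORY 16          /* assumed number of past cycles for the stddev */

struct detector {
    double prev_impact;     /* impact sampled in the previous cycle  */
    double deltas[HISTORY]; /* recent per-cycle changes in impact    */
    size_t n;               /* how many deltas have been recorded    */
    size_t next;            /* ring-buffer write position            */
    double k;               /* threshold = k * stddev (k is assumed) */
};

/* Feed one cycle's sampled impact; returns true if the change from the
 * previous cycle exceeds k times the stddev of recently observed changes. */
static bool detector_step(struct detector *d, double impact) {
    double delta = impact - d->prev_impact;
    d->prev_impact = impact;

    bool attack = false;
    if (d->n >= 2) {
        /* Standard deviation of the deltas recorded so far. */
        double mean = 0.0;
        for (size_t i = 0; i < d->n; i++) mean += d->deltas[i];
        mean /= (double)d->n;
        double var = 0.0;
        for (size_t i = 0; i < d->n; i++) {
            double x = d->deltas[i] - mean;
            var += x * x;
        }
        var /= (double)d->n;
        attack = fabs(delta) > d->k * sqrt(var);
    }

    /* Record this cycle's delta for future thresholds. */
    d->deltas[d->next] = delta;
    d->next = (d->next + 1) % HISTORY;
    if (d->n < HISTORY) d->n++;

    return attack;
}
```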

Threshold boundary selecting method for supporting helper thread pre-fetching distance parameters

The invention relates to a threshold boundary selecting method for supporting helper thread pre-fetching distance parameters and belongs to the technical field of memory access performance optimization for multi-core computers. The method can be used to enhance the execution performance of irregular, data-intensive applications. On the basis of a multi-core architecture with a shared cache, and targeting helper thread pre-fetching distance parameters based on mixed pre-fetching, the method introduces techniques such as left threshold boundary selection for the pre-fetching distance, right threshold boundary selection for the pre-fetching distance, and threshold boundary construction for the pre-fetching distance, so that the threshold boundary of each pre-fetching distance parameter is selected automatically, an optimal threshold of the pre-fetching distance parameter can be obtained within the determined boundary range, and the quality of helper thread pre-fetching control is improved. The method can be widely applied to memory access performance optimization for irregular data-intensive workloads, pre-fetching distance threshold optimization in helper thread pre-fetching control strategies, shared cache pollution control, and other areas.
Owner:BEIJING INSTITUTE OF TECHNOLOGY
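
The abstract above outlines the goal (pick left and right bounds for the pre-fetching distance and find the best value inside them) without giving the boundary-selection formulas, so the sketch below only shows the inner search over an already chosen range; `run_with_distance_fn` is an assumed benchmarking callback, not part of the patent.

```c
#include <stddef.h>

/* Assumed callback: runs the target workload with the helper thread
 * prefetching `distance` iterations ahead and returns the elapsed time. */
typedef double (*run_with_distance_fn)(int distance);

/* Given left/right boundaries for the prefetch-distance parameter (produced
 * by the boundary-selection step, not shown here), evaluate the candidates
 * inside the range and return the distance with the best measured runtime. */
static int pick_prefetch_distance(int left, int right, int step,
                                  run_with_distance_fn run) {
    int best = left;
    double best_time = run(left);
    for (int d = left + step; d <= right; d += step) {
        double t = run(d);
        if (t < best_time) {
            best_time = t;
            best = d;
        }
    }
    return best;
}
```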