
81 results about "DRAM cache" patented technology

DRAM typically serves as a system's main memory. Cache memory is typically a small amount of expensive, high-performance SRAM that the CPU can read and write much faster than main memory.
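
To make that latency gap concrete, here is a minimal sketch of the classic average-memory-access-time (AMAT) formula; the nanosecond figures are assumed, order-of-magnitude values, not measurements of any particular system.

```python
# Illustrative only: AMAT shows why a small, fast SRAM cache in front of
# DRAM main memory pays off. Latency values below are assumptions.

def amat(hit_time_ns: float, miss_rate: float, miss_penalty_ns: float) -> float:
    """AMAT = hit time + miss rate * miss penalty."""
    return hit_time_ns + miss_rate * miss_penalty_ns

sram_hit_ns = 1.0      # assumed SRAM cache hit latency
dram_access_ns = 60.0  # assumed DRAM main-memory access latency

for miss_rate in (0.02, 0.10, 0.30):
    print(f"miss rate {miss_rate:.0%}: "
          f"AMAT = {amat(sram_hit_ns, miss_rate, dram_access_ns):.1f} ns")
```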

DRAM (dynamic random access memory)-NVM (non-volatile memory) hierarchical heterogeneous memory access method and system adopting software and hardware collaborative management

The invention provides a DRAM (dynamic random access memory)/NVM (non-volatile memory) hierarchical heterogeneous memory system with software-hardware collaborative management. In the system, the NVM serves as large-capacity main memory while the DRAM serves as a cache for the NVM. Through effective use of certain reserved bits in the TLB (translation lookaside buffer) and the page table structure, the hardware overhead of a conventional hardware-managed hierarchical heterogeneous memory architecture is eliminated, the cache management problem of the heterogeneous memory system is shifted to the software level, and memory access latency after a last-level cache miss is reduced. Because many applications in big-data environments have poor data locality, a traditional demand-based data prefetching strategy for the DRAM cache can aggravate cache pollution; the system therefore adopts a utility-based data prefetching mechanism that decides, according to the current memory pressure and the application's memory access characteristics, whether data in the NVM should be cached in the DRAM. This improves the utilization of the DRAM cache and of the bandwidth from the NVM main memory to the DRAM cache.
Owner: HUAZHONG UNIV OF SCI & TECH
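
To illustrate the abstract's idea of using reserved page-table bits for software-managed caching, here is a minimal Python sketch; the bit position, class name, and remap table are illustrative assumptions, not the patent's actual data structures.

```python
# A minimal sketch (assumed names and bit layout, not the patent's actual
# design) of tracking DRAM-cached NVM pages via a reserved page-table-entry
# bit, so that cache management lives in software.

CACHED_BIT = 1 << 62   # assumed reserved PTE bit: page has a DRAM copy

class SoftwarePageTable:
    def __init__(self) -> None:
        self.pte = {}          # virtual page number -> NVM frame | flags
        self.dram_remap = {}   # virtual page number -> DRAM frame (valid if cached)

    def map(self, vpage: int, nvm_frame: int) -> None:
        """Install a mapping that initially points at NVM main memory."""
        self.pte[vpage] = nvm_frame

    def promote(self, vpage: int, dram_frame: int) -> None:
        """Copy the page into DRAM and flag it via the reserved bit."""
        self.pte[vpage] |= CACHED_BIT
        self.dram_remap[vpage] = dram_frame

    def translate(self, vpage: int):
        """Return which tier serves the page and the frame within that tier."""
        entry = self.pte[vpage]
        if entry & CACHED_BIT:
            return ("DRAM", self.dram_remap[vpage])   # DRAM cache hit
        return ("NVM", entry)                          # served from NVM

pt = SoftwarePageTable()
pt.map(vpage=7, nvm_frame=0x1234)
print(pt.translate(7))                # served from NVM
pt.promote(vpage=7, dram_frame=0x20)
print(pt.translate(7))                # now a DRAM cache hit
```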

DRAM/NVM hierarchical heterogeneous memory access method and system with software-hardware cooperative management

Active | US20170277640A1
Advantages: eliminates hardware overhead; reduces memory access delay
Concepts: memory architecture accessing/allocation; memory systems; term memory; page table
The present invention provides a DRAM/NVM hierarchical heterogeneous memory system with software-hardware cooperative management schemes. In the system, NVM is used as large-capacity main memory, and DRAM is used as a cache for the NVM. Reserved bits in the data structures of the TLB and the last-level page table are employed to eliminate the hardware costs of a conventional hardware-managed hierarchical memory architecture, and cache management in the heterogeneous memory system is pushed to the software level. The invention also reduces memory access latency on last-level cache misses. Because many applications have relatively poor data locality in big-data environments, a conventional demand-based data fetching policy for the DRAM cache can aggravate cache pollution. The invention therefore adopts a utility-based data fetching mechanism in the DRAM/NVM hierarchical memory system, which determines whether data in the NVM should be cached in the DRAM according to current DRAM utilization and the application's memory access patterns. This improves the efficiency of the DRAM cache and of the bandwidth between the NVM main memory and the DRAM cache.
Owner: HUAZHONG UNIV OF SCI & TECH
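
The utility-based fetching idea can be sketched as a simple admission test: fetch an NVM page into DRAM only when its observed reuse clears a bar that rises with memory pressure. The threshold model below is an assumption for illustration, not the patented policy.

```python
# Hedged sketch: admit an NVM page into the DRAM cache only if it is hot
# enough for the current memory pressure. As free DRAM shrinks, the
# access-count bar rises, so cold pages from low-locality (big data)
# workloads are not fetched and do not pollute the cache.

def should_cache(access_count: int, dram_free_ratio: float,
                 base_threshold: int = 4) -> bool:
    """Return True if the page's reuse justifies a DRAM fetch right now."""
    pressure = 1.0 - dram_free_ratio               # 0 = idle, 1 = full
    threshold = base_threshold * (1.0 + 4.0 * pressure)
    return access_count >= threshold

print(should_cache(access_count=8, dram_free_ratio=0.80))  # True: DRAM mostly free
print(should_cache(access_count=8, dram_free_ratio=0.05))  # False: high pressure
```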

NVMM: An Extremely Large, Logically Unified, Sequentially Consistent Main-Memory System

Embodiments of both a non-volatile main memory (NVMM) single-node and a multi-node computing system are disclosed. One embodiment of the NVMM single-node system has an all-DRAM cache subsystem and a large all-NAND-flash main memory subsystem, and provides different address-mapping policies for each software application. The NVMM memory controller provides high, sustained bandwidth for client processor requests by managing the DRAM cache as a large, highly banked system with multiple ranks and multiple DRAM channels, and with large cache blocks that accommodate large NAND flash pages. Multi-node systems organize NVMM single nodes in a large interconnected cache/flash main-memory low-latency network. The entire interconnected flash system exports a single address space to the client processors and, like a unified cache, is shared in a way that can be divided unevenly among those processors: client processors that need more memory resources receive more, at the expense of processors that need less. Multi-node systems have numerous configurations, from board-area networks to multi-board networks, with all nodes connected in various Moore graph topologies. Overall, the disclosed memory architecture dissipates less power per GB than traditional DRAM architectures, provides an extremely large solid-state capacity of a terabyte or more of main memory per CPU socket, and achieves a cost per bit approaching that of NAND flash with performance approaching that of an all-DRAM system.
Owner: JACOB BRUCE LEDLEY
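
The highly banked DRAM cache can be illustrated with a toy address map that interleaves flash-page-sized cache blocks across channels, ranks, and banks so that streaming requests hit many resources in parallel; the geometry constants below are assumptions, not figures from the disclosure.

```python
# Illustrative address mapping under an assumed geometry: consecutive
# cache blocks (sized to a NAND flash page) rotate across channels first,
# then ranks and banks, to sustain bandwidth.

BLOCK_BYTES = 16 * 1024   # assumed cache block = one 16 KiB NAND flash page
CHANNELS, RANKS, BANKS = 8, 4, 16

def map_address(paddr: int) -> dict:
    """Decompose a physical address into DRAM-cache placement fields."""
    block = paddr // BLOCK_BYTES
    return {
        "channel": block % CHANNELS,
        "rank":   (block // CHANNELS) % RANKS,
        "bank":   (block // (CHANNELS * RANKS)) % BANKS,
        "row":     block // (CHANNELS * RANKS * BANKS),
        "offset":  paddr % BLOCK_BYTES,
    }

# Two consecutive 16 KiB blocks land on different channels:
print(map_address(0x0000))
print(map_address(0x4000))
```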

A new USB-protocol-based computer acceleration device using multi-I/O-channel SLC NAND and a DRAM cache

This study presents a new USB-protocol-based computer acceleration device that uses multi-channel single-level-cell NAND flash memory (SLC NAND) and a dynamic random-access memory (DRAM) cache. The device includes a main controller chip, at least one SLC NAND module, and a USB interface that connects the device to a computer. It creates and assigns cache files in the SLC NAND and DRAM for the computer's cache system, caches commonly used applications, and reads and pre-reads frequently used files. The device driver improves the USB protocol, optimizes the BOT protocol of the traditional USB interface protocol, and optimizes resource allocation for the USB transport protocol. The algorithm and framework of the device employ the following design:
1. The device virtualizes application programs, pre-storing into the device all program files and the system environment files those programs require.
2. The device works in multi-I/O-channel mode: an array module integrates an array of SLC NAND chips and uses a main controller chip that can handle multiple I/O channels.
3. By monitoring long-term user habits, the device estimates which data the system will use and pre-stores that data (see the sketch after this entry).
4. The device intelligently compresses system memory and automatically releases it in the background.
Owner: WEIJIA ZHANG
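
Design point 3 (habit monitoring) can be sketched as frequency counting over file accesses; the class name, scoring rule, and pre-store budget below are hypothetical, chosen only to make the idea concrete.

```python
# Speculative sketch of habit-based pre-storing: count file accesses over
# time and pre-store the hottest files into the device's SLC NAND / DRAM
# cache. Scoring and capacity are assumptions.

from collections import Counter

class HabitMonitor:
    def __init__(self, capacity_files: int = 3):
        self.capacity = capacity_files   # assumed pre-store budget
        self.hits = Counter()            # long-term access counts per file

    def record_access(self, path: str) -> None:
        self.hits[path] += 1

    def files_to_prestore(self) -> list:
        """Pick the most frequently used files for pre-storage on the device."""
        return [path for path, _ in self.hits.most_common(self.capacity)]

mon = HabitMonitor()
for path in ["app.exe", "lib.dll", "app.exe", "cfg.ini", "app.exe", "lib.dll"]:
    mon.record_access(path)
print(mon.files_to_prestore())   # ['app.exe', 'lib.dll', 'cfg.ini']
```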

Novel USB-protocol computer acceleration device based on multi-channel SLC NAND and DRAM cache memory

The invention relates to a novel USB-protocol computer acceleration device based on multi-channel SLC NAND and a DRAM cache memory. The device comprises a main control chip and an SLC NAND module and is provided with a USB interface connected to a computer. Cache files, a cache system, and regularly used application-program files are established for the computer and distributed across the SLC NAND and a DRAM, with frequently read and written scattered files served from the high-speed cache. Meanwhile, the device driver improves the USB protocol: the BOT protocol of the traditional USB interface protocol is optimized, and resource distribution is optimized for the USB transport protocol. The algorithm and framework of the device adopt the following design (a channel-striping sketch follows this entry):
1. Application programs are virtualized by the device, so that all program files and the system environment files needed by the programs are pre-stored in the device.
2. A multi-channel mode is adopted: an array module integrates multiple SLC NAND chips and adopts a multi-channel main controller.
3. By monitoring user habits over a long period, the system judges which data will be used and pre-stores the data in the device.
4. System memory is intelligently compressed and automatically released in the background.
Owner: ZHANG WEIJIA (张维加)
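
Design point 2 (the multi-channel array) amounts to striping pages round-robin across NAND channels so several chips can program in parallel; the channel count and page size in this sketch are assumed values, not device specifications.

```python
# Hedged sketch of multi-channel striping: split data into pages and deal
# them round-robin across an assumed number of SLC NAND channels.

N_CHANNELS = 4       # assumed number of SLC NAND channels
PAGE_SIZE = 2048     # assumed SLC NAND page size in bytes

def stripe(data: bytes) -> list:
    """Split data into pages and queue them round-robin per channel."""
    queues = [[] for _ in range(N_CHANNELS)]
    for i in range(0, len(data), PAGE_SIZE):
        page = data[i:i + PAGE_SIZE]
        queues[(i // PAGE_SIZE) % N_CHANNELS].append(page)
    return queues

queues = stripe(bytes(10 * PAGE_SIZE))
print([len(q) for q in queues])   # pages per channel: [3, 3, 2, 2]
```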