69 results for "Improve memory access performance" patented technology

Cache replacement method under heterogeneous memory environment

Active · CN104834608A · Comprehensive consideration of memory access characteristics · Increase space size · Memory addressing/allocation/relocation · Hardware structure · Phase-change memory
The invention discloses a cache replacement method for a heterogeneous memory environment. A source flag bit is added to the cache line hardware structure to record whether the line's data comes from DRAM (Dynamic Random Access Memory) or PCM (Phase Change Memory), and a new sample storage unit is added to the CPU to record the program's cache access behavior and data reuse range. The method comprises three sub-methods: a sampling sub-method that gathers statistics on cache access behavior, an equivalent-position calculation sub-method that computes equivalent positions, and a replacement sub-method that determines which cache line to replace. By optimizing the traditional cache replacement policy for the access characteristics of programs in a heterogeneous memory environment, the method reduces the high latency cost of accessing PCM on a cache miss and thereby improves the cache access performance of the whole system.
Owner:HUAZHONG UNIV OF SCI & TECH
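The abstract's core idea — a per-line source flag steering victim selection — can be sketched as follows. This is a minimal illustration, not the patent's actual algorithm: the class, the `lru_age` field, and the tie-breaking rule (among equally old lines, evict DRAM-backed ones first, since refilling from DRAM is far cheaper than from PCM) are all assumptions for the sake of the example.

```python
# Hypothetical sketch: each cache line carries a source flag (DRAM vs. PCM);
# the replacement policy prefers evicting DRAM-backed lines so that a future
# miss on the evicted data pays DRAM latency rather than PCM latency.

DRAM, PCM = "DRAM", "PCM"

class CacheLine:
    def __init__(self, tag, source):
        self.tag = tag          # address tag
        self.source = source    # the added source flag bit: DRAM or PCM
        self.lru_age = 0        # recency counter (higher = older)

def pick_victim(cache_set):
    """Choose the line to evict: oldest first, and among equally old
    lines, prefer the DRAM-backed one to avoid a future PCM refill."""
    return max(cache_set, key=lambda l: (l.lru_age, l.source == DRAM))
```

A real implementation would combine this with the sampled reuse-range statistics the abstract mentions; here only the source-aware tie-break is shown.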

Memory access control device and method for a memory, processor and north-bridge chip

Provided are a memory access control device and method for a memory, a processor and a north-bridge chip. The memory access control device comprises a request analysis unit that parses an access request into an operation command sequence containing a plurality of operation commands, and an arbitration unit that arbitrates among the operation commands in the sequence according to arbitration conditions and sends them to the memory. Compared with the prior art, the device issues operation command sequences concurrently through the request analysis unit and uses first, second and third temporal constraints to control the interval between the send time of the current operation command and that of the preceding adjacent command in the same sequence, so that multiple memories can be accessed concurrently. Multiple memory groups can likewise be accessed concurrently, achieving multi-dimensional concurrent access; the average handling time of memory access requests is significantly shortened, and the overall memory access performance of the system is improved.
Owner:JIANGNAN INST OF COMPUTING TECH
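The two-unit structure described above can be sketched in a few lines. This is an illustrative assumption, not the patented controller: the command names (ACT/RD/PRE), the `MIN_INTERVAL` table standing in for the "first/second/third temporal constraints", and the specific cycle counts are all invented for the example.

```python
# Sketch: a request is decomposed into an operation-command sequence, and
# the arbiter issues each command only after the minimum interval since the
# adjacent previous command of the same sequence has elapsed.

# Assumed minimum gaps between adjacent commands, in controller cycles.
MIN_INTERVAL = {("ACT", "RD"): 4, ("RD", "PRE"): 6}

def decompose(request):
    """Request analysis unit: turn a read request into its command sequence."""
    return ["ACT", "RD", "PRE"]

def schedule(commands, now=0):
    """Arbitration unit: assign each command an issue cycle that honors
    the timing constraint against its predecessor in the sequence."""
    issue_times = []
    for prev, cur in zip([None] + commands, commands):
        gap = MIN_INTERVAL.get((prev, cur), 0)
        now = now + gap if issue_times else now
        issue_times.append(now)
    return issue_times
```

Because each sequence only constrains itself, sequences bound for different memories can be interleaved in the gaps, which is the concurrency the abstract claims.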

Command-cancel-based cache pipeline lock-step concurrent execution method

The invention discloses a command-cancel-based cache pipeline lock-step concurrent execution method, implemented through the following steps: (1) the coherence engine and the last-level cache execute in lock-step according to the beat count specified by the pipeline, each receiving a message from the coherence cache; (2) the coherence engine judges whether the message hits the coherence cache, while the last-level cache judges whether the message hits the last-level cache; (3) the coherence engine judges whether the last-level cache needs to be accessed; if so, it sends a command confirmation signal to the last-level cache, allowing it to access off-chip memory, and if not, it sends a command cancel signal to the last-level cache to prevent it from accessing off-chip memory. The method offers low memory access latency and high memory access performance.
Owner:NAT UNIV OF DEFENSE TECH
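Steps (2) and (3) above can be condensed into a small sketch. All names here are assumptions for illustration; the sets standing in for the coherence cache and last-level cache abstract away the real lock-step hardware, and only the confirm/cancel decision is modeled.

```python
# Sketch of the lock-step flow: both units look up the same message in the
# same beat; the coherence engine then either confirms the LLC's pending
# off-chip access or cancels it.

def process_message(msg, coherence_cache, llc):
    dir_hit = msg in coherence_cache    # step (2): coherence engine lookup
    llc_hit = msg in llc                # step (2): LLC lookup, in lock-step
    need_llc = not dir_hit              # step (3): must the LLC act?
    signal = "CONFIRM" if need_llc else "CANCEL"
    return signal, llc_hit
```

The point of the cancel signal is that the LLC can start its lookup speculatively without waiting for the engine, which is where the low latency comes from.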

DDR4 performance balance scheduling structure and method for multiple request sources

The invention relates to the technical field of computer system architectures and processor microarchitectures, in particular to a DDR4 performance-balanced scheduling structure and method for multiple request sources. The scheduling structure comprises a plurality of memory access request scheduling buffers, one per request source, used to improve memory access bandwidth; a multi-source continuous arbitration component used to select one memory access request to transmit; and a DDR4 storage device that receives the request transmitted by the arbitration component. The corresponding scheduling method comprises the following steps: (1) setting a memory access request scheduling buffer for the memory access requests of each request source; and (2) having the multi-source continuous arbitration component select one request to transmit according to an arbitration strategy. Because separate scheduling buffers are set for the multiple request sources, the impact on memory access latency can be reduced while the memory access bandwidth is improved, and the overall memory access performance of the system is improved.
Owner:JIANGNAN INST OF COMPUTING TECH
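Steps (1) and (2) can be sketched as below. The round-robin policy is an assumed stand-in for the patent's unspecified arbitration strategy, and the class and method names are invented for the example.

```python
# Sketch: one scheduling buffer per request source, plus an arbiter that
# picks one ready request per cycle to send to the DDR4 device.

from collections import deque

class BalancedScheduler:
    def __init__(self, num_sources):
        # Step (1): one scheduling buffer per request source.
        self.buffers = [deque() for _ in range(num_sources)]
        self.next_src = 0

    def enqueue(self, src, request):
        self.buffers[src].append(request)

    def arbitrate(self):
        # Step (2): select one request across sources (round-robin here).
        n = len(self.buffers)
        for i in range(n):
            src = (self.next_src + i) % n
            if self.buffers[src]:
                self.next_src = (src + 1) % n
                return self.buffers[src].popleft()
        return None     # no pending requests
```

Keeping the sources in separate buffers is what prevents a bursty source from inflating every other source's queueing delay.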

Memory allocation method based on fine granularity

Active · CN108920254A · Improve deduplication rate · Alleviate memory bloat · Software simulation/interpretation/emulation · Distribution method · Granularity
The invention discloses a fine-granularity memory allocation method characterized by virtual machine type detection, detection of page types inside a virtual machine, a fine-grained differentiated page allocation strategy, and an access-aware dynamic memory allocation strategy. Because the virtual machine type is distinguished, the pages used by I/O-intensive and compute-intensive virtual machines are all small; compared with the system's default strategy of allocating large pages to virtual machines, this alleviates memory bloat, reduces memory allocation overhead and increases the memory deduplication rate. Meanwhile, for a memory-access-intensive virtual machine, the allocated anonymous pages are large, so high memory access performance is maintained, while its allocated Page Cache pages and kernel pages are small; compared with the default large-page strategy, this again alleviates memory bloat, reduces memory allocation overhead, increases the memory deduplication rate, and minimizes the loss of system performance.
Owner:UNIV OF SCI & TECH OF CHINA
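The differentiated allocation rule above reduces to a small decision function. This is a sketch under stated assumptions: the type and page-class labels, and the use of x86-style 4 KiB/2 MiB sizes, are illustrative choices, not the patent's terms.

```python
# Sketch: pick a page size per (VM type, page class). Only memory-access-
# intensive VMs get large anonymous pages; everything else uses small pages
# to ease memory bloat and raise the deduplication rate.

SMALL, LARGE = 4 * 1024, 2 * 1024 * 1024   # 4 KiB and 2 MiB pages

def choose_page_size(vm_type, page_class):
    if vm_type == "memory-intensive" and page_class == "anonymous":
        return LARGE    # keep memory access performance high
    return SMALL        # favor deduplication and cheap allocation
```

The split by page class matters: large pages only pay off for hot anonymous memory, while Page Cache and kernel pages dedupe much better at 4 KiB granularity.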