
Batch memory scheduling method based on Bank division

A memory scheduling method and memory technology, applied to resource allocation, program startup/switching, program control design, and related areas. It addresses problems such as increased system power consumption, increased randomness of memory access requests, and the inability to exploit the locality of memory access requests, achieving the effect of reduced memory power consumption.

Inactive Publication Date: 2018-11-02
BEIJING UNIV OF TECH

AI Technical Summary

Problems solved by technology

Memory requests from the multiple CPU applications also interfere with one another. Each core wants its own memory access requests processed quickly, and this competition destroys the access pattern of any single program, making the memory request stream more random. As a result, the locality principle of memory accesses cannot be exploited, and the row-buffer hit rate drops sharply.
The lower hit rate not only increases system power consumption and access latency, but also exacerbates the memory wall problem.

Method used



Embodiment Construction

[0025] The present invention will be further described below in conjunction with the accompanying drawings.

[0026] Figure 1 is a hierarchical diagram of a modern DRAM memory system. Such a system usually includes one or more memory channels (Channels); different channels operate in parallel, and each channel has independent address, data, and command buses. Each channel contains one or more storage arrays (Ranks), and all Ranks in the same channel share the channel's resources. Each Rank contains multiple memory banks (Banks). All Banks in a Rank share the command bus and address bus; the Rank's data bus width is the sum of the data bus widths of all its Banks, and all Banks in a Rank can read and write data in parallel.
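The channel/rank/bank hierarchy described above implies that a memory controller must decode each physical address into its position in that hierarchy. The sketch below is purely illustrative and not taken from the patent: the field widths (2 channels, 2 ranks, 8 banks, and the bit ordering) are hypothetical example values.

```python
# Illustrative sketch: decoding a physical address into channel / rank /
# bank / row / column fields for a DRAM hierarchy like the one described
# above. Field widths and bit ordering are hypothetical assumptions.

CHANNEL_BITS = 1   # 2 channels
RANK_BITS    = 1   # 2 ranks per channel
BANK_BITS    = 3   # 8 banks per rank
ROW_BITS     = 15  # 32K rows per bank
COL_BITS     = 10  # 1K columns per row

def decode(addr):
    """Split a physical address into (channel, rank, bank, row, column)."""
    col = addr & ((1 << COL_BITS) - 1)
    addr >>= COL_BITS
    bank = addr & ((1 << BANK_BITS) - 1)
    addr >>= BANK_BITS
    rank = addr & ((1 << RANK_BITS) - 1)
    addr >>= RANK_BITS
    channel = addr & ((1 << CHANNEL_BITS) - 1)
    addr >>= CHANNEL_BITS
    row = addr & ((1 << ROW_BITS) - 1)
    return channel, rank, bank, row, col
```

Placing the bank bits just above the column bits, as here, is one common choice because it spreads consecutive cache-line addresses across banks, letting the parallel Banks of a Rank serve them concurrently.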

[0027] A DRAM memory system supports three DRAM operations: row activation, column access, and precharge.

[0028] Row activation: according to the row address, activate the target row in the dat...
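The interaction of the three operations just listed is what determines whether a request is a row-buffer hit. The following toy bank model is a hedged sketch, not the patent's mechanism: the latency constants are illustrative, and it assumes an open-row policy in which the activated row stays in the row buffer until a conflicting request forces a precharge.

```python
# Hedged sketch: a toy open-row bank model showing how row activation,
# column access, and precharge combine. A request to the currently open
# row is a row-buffer hit (column access only); a request to a different
# row forces precharge (if a row is open) plus activation before the
# column access. Latency values are illustrative assumptions.

T_ACT, T_CAS, T_PRE = 15, 15, 15  # hypothetical cycle counts

class Bank:
    def __init__(self):
        self.open_row = None  # row currently held in the row buffer

    def access(self, row):
        """Return the latency of accessing `row` under an open-row policy."""
        if self.open_row == row:           # row-buffer hit
            return T_CAS
        latency = T_CAS + T_ACT            # must activate the target row
        if self.open_row is not None:      # conflict: close the old row first
            latency += T_PRE
        self.open_row = row
        return latency

bank = Bank()
print(bank.access(5))  # miss on an idle bank: activate + column access
print(bank.access(5))  # hit: column access only
print(bank.access(9))  # conflict: precharge + activate + column access
```

The hit/miss asymmetry in this model is the reason the hit-rate loss described earlier translates directly into higher latency and power.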



Abstract

The invention discloses a batch memory scheduling method based on Bank division. First, memory requests are classified by source into CPU memory requests and GPU memory requests, which are assembled into separate batches in a CPU batch buffer and a GPU batch buffer. Second, the batch to be processed next is selected from the two buffers. If a GPU batch is selected, the next request within it is chosen using the first-ready first-come-first-serve (FR-FCFS) scheduling policy, which prioritizes row-buffer hits. If a CPU batch is selected, the Banks are partitioned so that memory access requests from different kernels are mapped to different Banks, isolating the requests of the individual CPU applications. This scheme eliminates the mutual interference between the CPU and GPU memory access streams, preserves the access characteristics of each kernel to the maximum extent, and raises the row-buffer hit rate, thereby reducing memory power consumption and improving system performance.
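The two mechanisms named in the abstract can be sketched in a few lines. This is a simplified illustration under stated assumptions, not the patent's implementation: the 8-bank / 4-core split, the function names, and the request representation are all hypothetical.

```python
# Hedged sketch of the two ideas in the abstract: (1) Bank division maps
# each CPU core's requests onto a disjoint subset of banks, and (2) within
# a batch, an FR-FCFS-style policy serves row-buffer hits before older
# misses. The 8-bank / 4-core split and all names are illustrative.

NUM_BANKS, NUM_CORES = 8, 4
BANKS_PER_CORE = NUM_BANKS // NUM_CORES

def bank_for(core_id, addr):
    """Confine a CPU core's requests to its private subset of banks."""
    return core_id * BANKS_PER_CORE + (addr % BANKS_PER_CORE)

def fr_fcfs(queue, open_row):
    """Pick the next request FR-FCFS-style: the oldest row-buffer hit if
    any exists, otherwise the oldest request overall.
    `queue` is a non-empty list of (arrival_time, row) tuples."""
    hits = [req for req in queue if req[1] == open_row]
    return min(hits) if hits else min(queue)
```

Because `bank_for` gives each core a disjoint bank range, one core's requests can never evict another core's open row, which is precisely the isolation property the abstract claims for CPU batches.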

Description

Technical field

[0001] The invention belongs to the field of computer memory system architecture, and in particular relates to a performance-oriented batch memory scheduling method based on Bank division.

Background technique

[0002] The heterogeneous multi-core architecture, which integrates multiple CPUs and GPUs on the same chip, has gradually become the mainstream advanced architecture. Modern memory systems reduce power consumption and improve performance mainly by exploiting the locality of memory access requests. Under a heterogeneous multi-core architecture, the CPU and GPU share the on-chip main memory, so memory requests from different cores compete for the shared memory resources and interfere with each other; the locality of each individual application's memory accesses gradually disappears, which seriously degrades overall system performance. Memory request interference falls mainly into two types: the inte...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06F9/50, G06F9/48
CPC: G06F9/4881, G06F9/5016, Y02D10/00
Inventor: Fang Juan (方娟), Wang Mengxuan (汪梦萱), Li Kai (李凯), Li Baocai (李宝才)
Owner: BEIJING UNIV OF TECH