
memory allocation

A memory allocation technology for memory systems, applicable to memory architecture access/allocation and resource allocation, addressing problems such as access conflicts that degrade the performance of the processing system.

Active Publication Date: 2022-08-05
IMAGINATION TECH LTD

AI Technical Summary

Problems solved by technology

In this way, the processing system can simultaneously access different banks within the same memory (for example, reading a register value from row 0 of bank 0 while reading another register value from row 2 of bank 1). However, whenever simultaneous attempts are made to access the same bank, a conflict occurs and one of the accesses must stall.
This affects the performance of the processing system.
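The conflict condition above can be sketched in a few lines. This is an illustrative model only, assuming a modulo register-to-bank mapping; the names `NUM_BANKS`, `bank_of`, and `classify_accesses` are not from the patent.

```python
# Hypothetical sketch of bank-conflict detection in a banked memory.
NUM_BANKS = 4

def bank_of(register: int) -> int:
    # Assumed mapping: bank = register mod number of banks
    return register % NUM_BANKS

def classify_accesses(reg_a: int, reg_b: int) -> str:
    """Two simultaneous reads can proceed only if they hit different banks."""
    if bank_of(reg_a) == bank_of(reg_b):
        return "conflict: one access stalls"
    return "parallel: both accesses proceed"

print(classify_accesses(0, 5))  # banks 0 and 1 -> parallel
print(classify_accesses(0, 4))  # both map to bank 0 -> conflict
```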



Detailed Description of Embodiments

[0038] The following description is presented by way of example to enable any person skilled in the art to make and use the invention. The present invention is not limited to the embodiments described herein, and various modifications to the disclosed embodiments will be apparent to those skilled in the art.

[0039] Embodiments will now be described by way of example only.

[0040] As described above, a processing system (e.g., a system including a CPU or GPU and memory) may include multiple banks within the memory. An executed instruction (e.g., a read or write instruction) typically does not refer to any particular bank, but only to a register number, e.g., read r0, where r0 refers to register 0. In known processing systems, an address generation unit maps register numbers to banks within the memory based on a defined formula (or relationship), such as:

[0041] (bank number) = (register number) mod (number of banks)     (Equation 1)

[0043] And the address decoding logic with...
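Equation 1 can be illustrated with a minimal address-generation sketch. `NUM_BANKS` and the row formula are illustrative assumptions, not taken from the patent; with 2 banks this reproduces the example given earlier (r0 in bank 0, row 0; another register in bank 1, row 2).

```python
# Minimal sketch of register-to-bank mapping based on Equation 1.
NUM_BANKS = 2  # assumed bank count for illustration

def map_register(register: int) -> tuple[int, int]:
    """Map a register number to (bank, row) within a banked memory."""
    bank = register % NUM_BANKS    # Equation 1
    row = register // NUM_BANKS    # assumed row layout within each bank
    return bank, row

print(map_register(0))  # (0, 0): r0 -> bank 0, row 0
print(map_register(5))  # (1, 2): r5 -> bank 1, row 2
```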


Abstract

Memory allocation methods are described. A first example method maps the registers referenced by different groups of instances of the same task to separate logical memories. Other example methods map the registers referenced by a task to different banks within a single logical memory; in various examples, this mapping may take into account which bank may become the primary bank for a particular task, as well as the bank assignments of one or more other tasks.
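The idea of assigning each task a "primary" bank while accounting for the assignments of other tasks can be sketched as follows. This is a hypothetical illustration, not the patent's method: `primary_bank`, the first-free policy, and the round-robin fallback are all assumptions.

```python
# Illustrative sketch: give each task a primary bank, preferring banks
# not already primary for another task, so concurrent tasks tend to
# access different banks.
NUM_BANKS = 4  # assumed bank count

def primary_bank(task_id: int, banks_in_use: set[int]) -> int:
    """Pick a primary bank for a task; policy here is hypothetical."""
    for bank in range(NUM_BANKS):
        if bank not in banks_in_use:
            return bank
    # All banks already taken as primaries: fall back to round-robin.
    return task_id % NUM_BANKS

in_use: set[int] = set()
assignments = {}
for tid in range(6):
    bank = primary_bank(tid, in_use)
    in_use.add(bank)
    assignments[tid] = bank
print(assignments)  # {0: 0, 1: 1, 2: 2, 3: 3, 4: 0, 5: 1}
```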

Description

Technical Field

[0001] This application relates to, but is not limited to, memory allocation in processing systems.

Background

[0002] In a processing system, when a task is created, a portion of the memory is allocated to the task. An address generation unit then maps the registers referenced within the task to actual memory addresses within the allocated memory portion. Two tasks can be allocated memory addresses within the same memory. Conflicts can occur when multiple access requests are made to the memory at the same time; for example, two tasks may each request a value from memory, or a single task may request two values from memory. This causes one access to stall until the other access is complete.

[0003] To increase read/write throughput (by reducing the occurrence of stalls), the memory can be arranged into separate banks, and in any cycle data can be read from each bank. In this way, the processing system can simultaneously access different ba...


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06F9/50
CPC: G06F9/5016; G06F12/0284; G06F2212/1024; G06F12/0223; G06F2209/507; G06F9/30123; G06F9/3851; G06F9/345; G06F9/3885; G06F3/0604; G06F3/0659; G06F3/0673; G06F9/30101; G06F9/324
Inventors: Isuru Herath, R. Broadhurst
Owner: IMAGINATION TECH LTD