
Data stream prefetching method based on access instruction

A data stream prefetching technology, applied in the fields of concurrent instruction execution and machine execution devices. It addresses problems such as excessive occupation of the hardware resources needed by memory access instructions, which hinders the execution of those instructions and degrades processor memory access performance, and achieves the effects of avoiding occupation of the instruction reordering buffer and improving the first-level data cache hit rate and memory access performance.

Active Publication Date: 2008-05-28
上海高性能集成电路设计中心

AI Technical Summary

Problems solved by technology

The hardware resources consumed by data stream prefetch instructions are also needed by other memory access instructions. If prefetch instructions are used improperly, then even when they do improve the hit rate of the first-level data cache for normal memory access instructions, their excessive occupation of these resources turns the resources into a new bottleneck that hinders the execution of memory access instructions and thereby degrades processor memory access performance.

Examples


Embodiment 1

[0021] As shown in Figure 2, when data stream prefetch instruction A enters the issue queue 101, it also enters the instruction reordering buffer 108, where it waits to retire. After instruction A is issued, the physical address required for the memory access is generated by the address calculation unit 102 and the address translation unit 103. If a precise breakpoint fault or exception occurs during this process, the data stream prefetch instruction unconditionally gives up the memory access, immediately notifies the instruction reordering buffer 108 that the instruction may retire, and reports no error.

[0022] Using this physical address, the query unit 104 looks up the data in the first-level data cache 109. If the lookup hits in the first-level data cache 109 and the state of the cache line satisfies the data stream prefetch request, the prefetch operation ends and the instruction reorder ...
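
As a concrete illustration of the Embodiment 1 flow in [0021] and [0022], the following self-contained C sketch models the named units (address calculation unit 102, address translation unit 103, query unit 104, miss address buffer 107, reordering buffer 108, first-level data cache 109) as stub functions. All names, types, and the toy hit predicate are invented for illustration, and the miss path is assumed to mirror the one spelled out in Embodiment 2 and the abstract; this is not the patent's actual hardware logic.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative sketch of the Embodiment 1 flow; all names are stand-ins. */

typedef enum { PF_DONE, PF_MISS_PENDING, PF_ABANDONED } pf_result_t;

/* Stub for the address calculation unit 102 / address translation unit 103.
 * Returning false models a precise breakpoint fault or exception. */
static bool gen_physical_address(uint64_t vaddr, uint64_t *paddr) {
    *paddr = vaddr;                 /* identity mapping, for illustration only */
    return true;
}

/* Stub for the query unit 104 probing the first-level data cache 109. */
static bool l1_hit_with_valid_state(uint64_t paddr) {
    return (paddr & 0x40) != 0;     /* arbitrary toy predicate */
}

/* Stub for allocating an entry in the miss address buffer 107. */
static bool alloc_miss_address_buffer(uint64_t paddr) {
    (void)paddr;
    return true;
}

/* Embodiment 1: the prefetch occupies a reorder-buffer entry and exits through
 * it, but a fault or exception never produces an architectural error. */
static pf_result_t stream_prefetch(uint64_t vaddr, bool *rob_entry_retirable) {
    uint64_t paddr;
    if (!gen_physical_address(vaddr, &paddr)) {  /* fault or exception */
        *rob_entry_retirable = true;             /* exit ROB 108, no error reported */
        return PF_ABANDONED;
    }
    if (l1_hit_with_valid_state(paddr)) {        /* hit and line state satisfies request */
        *rob_entry_retirable = true;
        return PF_DONE;                          /* prefetch ends here */
    }
    /* Miss: continue the access through the miss address buffer 107
     * (assumed to follow the path described in Embodiment 2 and the abstract). */
    if (alloc_miss_address_buffer(paddr)) {
        *rob_entry_retirable = true;             /* assumed: retires once the entry is allocated */
        return PF_MISS_PENDING;
    }
    *rob_entry_retirable = true;                 /* assumed: no free entry, silently give up */
    return PF_ABANDONED;
}

int main(void) {
    bool retirable = false;
    pf_result_t r = stream_prefetch(0x1040, &retirable);
    printf("result=%d retirable=%d\n", (int)r, (int)retirable);
    return 0;
}
```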

Embodiment 2

[0024] As shown in Figure 3, when data stream prefetch instruction A enters the issue queue, it retires immediately without entering the instruction reordering buffer 108, and the age number assigned to it is the same as that of the previous instruction. After instruction A is issued, the physical address required for the memory access is generated by the address calculation unit 102 and the address translation unit 103. If a precise breakpoint fault or exception occurs during this process, the data stream prefetch instruction unconditionally gives up the memory access, but no error is reported.

[0025] Using this physical address, the query unit 104 looks up the first-level data cache 109. If the lookup hits in the first-level data cache 109 and the state of the cache line satisfies the data stream prefetch request, the prefetch operation ends. Otherwise, an entry is requested from the miss address buffer 107, and if th...
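
The issue-side difference of Embodiment 2 (immediate retirement, no reorder-buffer entry, reuse of the previous instruction's age number) can be sketched as below; the struct fields and function names are illustrative stand-ins, not the patent's design.

```c
#include <stdint.h>
#include <stdio.h>

/* Sketch of the Embodiment 2 issue-side behavior: the data stream prefetch
 * retires the moment it enters the issue queue, allocates no entry in the
 * instruction reordering buffer 108, and reuses the previous instruction's
 * age number. Names are illustrative only. */

typedef struct {
    uint32_t age;           /* age number used for ordering in the issue queue */
    int      has_rob_entry; /* whether an entry in ROB 108 was allocated */
    int      retired;       /* architecturally finished */
} issued_op_t;

static uint32_t last_age = 0;

/* Normal memory access instruction: gets a fresh age number and an ROB entry. */
static issued_op_t issue_normal_access(void) {
    issued_op_t op = { .age = ++last_age, .has_rob_entry = 1, .retired = 0 };
    return op;
}

/* Embodiment 2 prefetch: no ROB entry, same age number as the previous
 * instruction, and it counts as retired as soon as it enters the issue queue. */
static issued_op_t issue_stream_prefetch(void) {
    issued_op_t op = { .age = last_age, .has_rob_entry = 0, .retired = 1 };
    return op;
}

int main(void) {
    issued_op_t load = issue_normal_access();
    issued_op_t pf   = issue_stream_prefetch();
    printf("load: age=%u rob=%d  prefetch: age=%u rob=%d retired=%d\n",
           load.age, load.has_rob_entry, pf.age, pf.has_rob_entry, pf.retired);
    return 0;
}
```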


Abstract

The invention discloses a data stream prefetching method based on memory access instructions. It involves a first-level data cache, a main memory, and a data stream prefetch instruction built on the memory access instructions of a RISC processor, where the data in the first-level data cache is a subset of the data in the main memory. The data stream prefetch instruction retires immediately whether or not it hits the first-level data cache; it enters neither the load queue, which holds read-type memory access instructions, nor the store queue, which holds write-type memory access instructions; and it occupies an entry in the miss address buffer when it misses the first-level data cache or the state of the cache line does not meet its requirements, using that entry to continue the memory access operation. The invention not only improves the hit rate of the first-level data cache for normal memory access instructions, but also reduces competition for hardware resources between data stream prefetch instructions and normal memory access instructions, without significantly increasing hardware design complexity or cost.

Description

Technical field

[0001] The invention relates to a method for improving the hit rate of the first-level data cache of a microprocessor through data stream prefetching, and in particular to a method for realizing data stream prefetching through memory access instructions.

Background technique

[0002] The hit rate of the first-level data cache is crucial to the memory access performance of a microprocessor, and data prefetching is an effective means of improving it. Data prefetching can be realized under software control or hardware control. Software-controlled data stream prefetching is realized by executing a special memory access instruction, the "data stream prefetch instruction". Compared with hardware-controlled data stream prefetching, it requires no dedicated control logic or prefetch buffer, which saves hardware overhead and reduces design complexity, but it also has obvious shortcomings.

[0003] Data stream prefetch...
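
As background context only, software-controlled data prefetching of the kind described in [0002] is commonly exposed to programmers through a compiler intrinsic. The sketch below uses GCC/Clang's __builtin_prefetch as a generic example; this is an assumption about typical toolchains, not the instruction set of this patent, and the prefetch distance of 16 elements is arbitrary.

```c
#include <stddef.h>
#include <stdio.h>

/* Software-controlled data stream prefetching, illustrated with the GCC/Clang
 * __builtin_prefetch intrinsic. The second argument is 0 for a read access,
 * the third is a temporal-locality hint (3 = keep in all cache levels). */
static long sum_with_prefetch(const long *a, size_t n)
{
    long s = 0;
    for (size_t i = 0; i < n; i++) {
        if (i + 16 < n)
            __builtin_prefetch(&a[i + 16], 0, 3);  /* request the line ahead of use */
        s += a[i];
    }
    return s;
}

int main(void)
{
    long data[64];
    for (size_t i = 0; i < 64; i++)
        data[i] = (long)i;
    printf("sum = %ld\n", sum_with_prefetch(data, 64));
    return 0;
}
```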


Application Information

IPC(8): G06F9/38
Inventor: 王飙, 杨剑新
Owner: 上海高性能集成电路设计中心