
A method of returning fetched data in cache in advance

A memory access technology, applied in the field of early return of memory access data, which solves problems such as large memory access latency and achieves simple operation, improved memory access speed, and a simple principle.

Active Publication Date: 2015-08-26
NAT UNIV OF DEFENSE TECH


Problems solved by technology

In a cache with a short pipeline, a memory access request that misses must traverse the pipeline at least three times, resulting in a large latency for accesses that miss in the cache.
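The cost of those extra pipeline passes can be illustrated with simple latency arithmetic. The sketch below uses assumed example numbers (`PIPE_DEPTH`, `MEM_LATENCY`) that are not from the patent; it only shows the shape of the saving.

```python
# Illustrative latency arithmetic for the problem described above.
# PIPE_DEPTH and MEM_LATENCY are assumed example values, not from the patent.

PIPE_DEPTH = 3     # cycles for one pass through the short cache pipeline
MEM_LATENCY = 100  # cycles for a read from the next-level cache or memory

def miss_latency(pipeline_passes):
    """Total cycles for a missing access that makes the given number of passes."""
    return pipeline_passes * PIPE_DEPTH + MEM_LATENCY

baseline = miss_latency(3)  # miss detection, fill, and re-issue passes
early = miss_latency(1)     # data forwarded to the core on arrival
saved = baseline - early    # cycles recovered by returning data early
```

With these assumed numbers, skipping two of the three passes recovers `2 * PIPE_DEPTH` cycles per missing access; the absolute saving scales with the pipeline depth, not the memory latency.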




Embodiment Construction

[0020] The present invention will be further described in detail below in conjunction with the accompanying drawings and specific embodiments.

[0021] As shown in figure 2, when the cache misses, the flow of the method of the present invention for returning access data in advance is:

[0022] 1. The core sends a memory access request;

[0023] 2. The request passes through the cache pipeline for the first time and misses, so a read request is sent to the next-level cache or storage control;

[0024] 3. The next-level cache or storage control performs the read operation and returns the data to the cache;

[0025] 4. The data is returned to the core;

[0026] 5. The returned data is filled into the cache.
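The five steps above can be sketched in sequential code. All class and function names below (`Cache`, `NextLevel`, `deliver_to_core`) are illustrative assumptions, not taken from the patent; the point is the ordering, with data delivered to the core (step 4) before the fill completes (step 5).

```python
# Minimal sketch of the five-step miss flow; names are illustrative.

class NextLevel:
    """Stands in for the next-level cache or storage control."""
    def __init__(self, contents):
        self.contents = contents

    def read(self, addr):                       # step 3: perform the read
        return self.contents[addr]

class Cache:
    def __init__(self, next_level):
        self.lines = {}                         # address -> data
        self.next_level = next_level

    def load(self, addr, deliver_to_core):
        # Steps 1-2: first pass through the cache pipeline.
        if addr in self.lines:
            deliver_to_core(self.lines[addr])   # hit: data returned at once
            return
        data = self.next_level.read(addr)       # steps 2-3: read request, data back
        deliver_to_core(data)                   # step 4: return to the core first
        self.lines[addr] = data                 # step 5: fill the cache afterwards
```

`deliver_to_core` is invoked before the fill to mirror the step ordering; in hardware the two would proceed concurrently, with the core not waiting on the fill.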

[0027] That is, when the memory access instruction flows through the pipeline for the first time, the corresponding address and control information are recorded. When the data is returned from the storage control (MCU) or the next-leve...
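The recording step in paragraph [0027] can be sketched as a small table of outstanding misses. The `MissRecord` structure, the `pending` map, and the `dest_reg` field are all assumptions for illustration, not structures named in the patent; they show how a returning data item can be matched to its recorded request and forwarded to the core without another full pipeline pass.

```python
# Hedged sketch of the early-return mechanism: record the miss's address and
# control information on the first pipeline pass, then on data return match
# the record and deliver to the core directly. All names are assumptions.

from dataclasses import dataclass

@dataclass
class MissRecord:
    addr: int
    dest_reg: int          # control info: destination register in the core

pending = {}               # outstanding misses, keyed by address

def record_miss(addr, dest_reg):
    """First pipeline pass: record address and control information."""
    pending[addr] = MissRecord(addr, dest_reg)

def on_data_return(addr, data, core_regs, cache_lines):
    """Data back from the MCU: early return to the core, then cache fill."""
    rec = pending.pop(addr)
    core_regs[rec.dest_reg] = data   # returned to the core directly
    cache_lines[addr] = data         # fill proceeds without delaying the core
```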



Abstract

The invention provides a method for returning access data in advance in a cache. The method comprises the following steps: (1) the core sends a memory access request; (2) the request passes through the cache pipeline for the first time and misses, so a read request is sent to the next-level cache or memory controller; (3) the next-level cache or memory controller performs the read operation and returns the data to the cache; (4) the data is returned to the core; and (5) the returned data is filled into the cache. With the method of the invention, the memory access speed can be greatly improved and the hardware cost reduced.

Description

technical field [0001] The invention mainly relates to the field of cache pipeline design in multi-core microprocessors, and in particular to a method for returning access data in a cache in advance. Background technique [0002] In modern microprocessor design, the storage system often uses a cache to reduce memory access latency. The memory access instructions processed by the cache mainly include load and store instructions, and processor execution is more sensitive to the latency of load instructions. If a load hits in the cache, the data is returned quickly; if it misses, there is a longer delay. In high-performance microprocessor design, a shorter pipeline is often used in order to return hitting memory access instructions earlier. An instruction that misses must pass through the pipeline multiple times to complete, and these multiple passes lengthen the execution time of the memory access instruction that does...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06F12/08, G06F12/0802
Inventor 衣晓飞, 邓让钰, 晏小波, 李永进, 周宏伟, 张英, 窦强, 曾坤, 谢伦国
Owner NAT UNIV OF DEFENSE TECH