Instruction cache system and its instruction acquiring method

An instruction caching technology, applied in the field of concurrent instruction execution and machine-execution devices, that addresses problems such as increased bus bandwidth pressure and difficulty in reducing chip power consumption. Its effects include lower system power consumption, a higher instruction fetch rate, and improved retrieval speed.

Inactive Publication Date: 2014-07-02
SUN YAT SEN UNIV


Problems solved by technology

This increases the bandwidth pressure of the bus to a certain extent, and is not conducive to reducing chip power consumption.



Examples


Embodiment 1

[0029] See Figure 1, a schematic diagram of the composition of an embodiment of the novel instruction cache system of the present invention, taking an SoC chip as an example. The new instruction cache system includes a microprocessor, a system control coprocessor (CP0), and a memory management unit (MMU). In this embodiment, the microprocessor adopts a single-core MIPS 4Kc architecture. The MIPS 4Kc and CP0, the MIPS 4Kc and MMU, and the CP0 and MMU are respectively connected for access control and instruction processing. Under the MIPS 4Kc architecture, CP0 assists the processor with operations such as exception/interrupt handling, cache filling, translation lookaside buffer (TLB) decoding and filling, and operating-mode conversion. The MMU is a control unit used to manage virtual memory and physical memory; it connects to memory (RAM) or other external memory (such as Flash) through a bus, and is also responsible for mapping virtual addres...
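The MMU's role described above, translating virtual addresses to physical ones before memory is accessed, can be sketched as a simple page-table lookup. This is a hedged illustration only: the page size, table layout, and function names below are illustrative assumptions, not taken from the patent.

```python
# Hypothetical sketch of an MMU-style virtual-to-physical translation.
# PAGE_SHIFT and the dict-based page table are illustrative assumptions.

PAGE_SHIFT = 12  # assume 4 KiB pages

def translate(vaddr, page_table):
    """Map a virtual address to a physical address via a page table."""
    vpn = vaddr >> PAGE_SHIFT                 # virtual page number
    offset = vaddr & ((1 << PAGE_SHIFT) - 1)  # offset within the page
    pfn = page_table[vpn]                     # KeyError models a page miss
    return (pfn << PAGE_SHIFT) | offset
```

A real MMU would also consult a TLB (filled with CP0's help, as the embodiment notes) before walking any table; that fast path is omitted here.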

Embodiment 2

[0035] The structure of the instruction cache system in the second embodiment is roughly the same as that in the first embodiment, so it is not repeated here. See Figure 2, a schematic structural diagram of another embodiment of the novel instruction cache system of the present invention. Here, CP0 establishes a connection with L1, and the MMU also establishes a connection with L1 and can manage access to the instructions stored in L1. The microprocessor can read instructions stored in L1 or RAM through the MMU. L1 is a traditional four-way set-associative cache with 128 blocks per way, each block four words in size. Each L1 line has a tag value; the tag is 27 bits wide, comprising the high 21 physical-address bits, 4 valid bits, 1 replacement bit (using a most-recently-filled algorithm: when a cache line is filled, its replacement bit is set to 1 and the replacement bits of the other cache lines in the same set are cleared to 0), and 1 lock b...
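The L1 organization described above can be modeled in a few lines. Note that the geometry is self-consistent: with 32-bit addresses, 4-word (16-byte) blocks take 2 byte-offset and 2 word-offset bits, 128 sets take 7 index bits, leaving the 21 high address bits the tag stores. The sketch below is a hypothetical model, not the patent's implementation; class and method names are invented for illustration.

```python
# Hypothetical model of the 4-way set-associative L1 from Embodiment 2:
# 128 sets, 4-word blocks, per-word valid bits, a lock bit, and the
# most-recently-filled replacement bit described in the text.

WAYS, SETS, WORDS_PER_BLOCK = 4, 128, 4

class Line:
    def __init__(self):
        self.tag = None                          # high 21 address bits
        self.valid = [False] * WORDS_PER_BLOCK   # one valid bit per word
        self.replace = 0                         # 1 = most recently filled in set
        self.lock = 0                            # locked lines are never evicted

class SetAssocCache:
    def __init__(self):
        self.sets = [[Line() for _ in range(WAYS)] for _ in range(SETS)]

    @staticmethod
    def split(addr):
        word = (addr >> 2) & 0x3     # 2-bit word offset within the block
        index = (addr >> 4) & 0x7F   # 7-bit set index (128 sets)
        tag = addr >> 11             # remaining 21 high address bits
        return tag, index, word

    def lookup(self, addr):
        tag, index, word = self.split(addr)
        for line in self.sets[index]:
            if line.tag == tag and line.valid[word]:
                return True          # hit
        return False                 # miss

    def fill(self, addr):
        tag, index, _ = self.split(addr)
        ways = self.sets[index]
        # Victim: any unlocked line whose replace bit is 0
        # (i.e. not the most recently filled line in the set).
        victim = next(l for l in ways if l.lock == 0 and l.replace == 0)
        victim.tag = tag
        victim.valid = [True] * WORDS_PER_BLOCK
        for l in ways:               # set the fill's replace bit, clear the rest
            l.replace = 1 if l is victim else 0
```

The replacement policy here only protects the single most recently filled line per set, which matches the one-bit scheme the embodiment describes (as opposed to full LRU, which would need more state per line).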

Embodiment 3

[0050] The composition of the new instruction cache system in the third embodiment of the present invention is substantially the same as that in the second embodiment, except that the microprocessor uses SMP technology such as dual-core or quad-core instead of a single core. Since each core has its own independent instruction fetch module and first-level cache, the novel instruction cache system composed of L0 and L1 described in the present invention can likewise improve the instruction fetch speed of each core and reduce system power consumption, thereby increasing execution speed and allowing more tasks to be accomplished. For the specific working principles and implementations of L0 and L1, reference may be made to Embodiment 1 and Embodiment 2, which are not repeated here.



Abstract

The invention discloses an instruction cache system and an instruction fetching method for the system. The instruction cache system comprises a microprocessor, a system control coprocessor, a memory management unit (MMU) connected to internal or external memory via a bus, a level-0 cache (L0), and a level-1 cache (L1). The L0 consists of two memory blocks, each provided with a tag value and each storing four instructions. The invention uses L0 in place of the instruction fetch module in the pipeline and operates the two memory blocks alternately, so as to maximize instruction fetch speed and realize instruction prefetching. A comparator compares the tag values of the memory blocks to automatically detect filled instructions. Four instructions can be transferred at a time between L0 and L1 or internal memory, reducing the access frequency of the MMU, L1, and internal memory, improving instruction fetch speed, and reducing system power consumption.
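The two-block L0 described in the abstract can be sketched as follows. This is a hedged model under stated assumptions: the class name, the callback used to fill a block from L1/RAM, and the strictly alternating fill order are illustrative, not details confirmed by the patent text.

```python
# Hypothetical model of the L0 from the abstract: two blocks of four
# instructions each, tag-checked by a comparator and filled alternately.

class L0Cache:
    def __init__(self, fetch_block):
        self.fetch_block = fetch_block   # fills one 4-word block from L1/RAM
        self.tags = [None, None]
        self.blocks = [None, None]
        self.next_fill = 0               # which block the next miss fills

    def fetch(self, addr):
        tag = addr >> 4                  # block-aligned address (4 words x 4 bytes)
        word = (addr >> 2) & 0x3
        for i in (0, 1):                 # comparator: check both block tags
            if self.tags[i] == tag:
                return self.blocks[i][word]          # L0 hit
        i = self.next_fill               # miss: fill the two blocks alternately
        self.tags[i] = tag
        self.blocks[i] = self.fetch_block(addr & ~0xF)  # 4 instructions at once
        self.next_fill ^= 1
        return self.blocks[i][word]
```

Alternating the fills means a sequential instruction stream can execute out of one block while the other is (pre)filled, which is the prefetching effect the abstract claims.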

Description

technical field

[0001] The invention relates to an instruction cache in a microprocessor system, in particular to a novel high-efficiency instruction cache system and an instruction fetching method thereof.

Background technique

[0002] As is well known, the access speed of the CPU is very fast, while the access speed of memory is relatively slow. To resolve this speed mismatch between CPU and memory, a small-capacity memory with extremely fast access, namely a cache (for example, the first-level cache), is usually employed, and data or instructions likely to be accessed are stored in the cache in advance. When the CPU needs to read a piece of data, it first looks it up in the cache at high speed; if found, the data is read immediately and sent to the CPU for processing. If no matching data is found in the cache, the search continues at a relatively slow speed in the next level of memory, and the data read...
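The lookup order the background section describes, check the fast cache first and fall back to slower memory only on a miss, reduces to a few lines. The function and variable names below are illustrative, not from the patent.

```python
# Minimal sketch of the cache-first lookup flow from the background section.

def read(addr, cache, memory):
    if addr in cache:          # fast path: cache hit
        return cache[addr]
    value = memory[addr]       # slow path: fetch from the next level of memory
    cache[addr] = value        # keep a copy so the next access hits
    return value
```

The second access to the same address is then served entirely from the cache, which is exactly the speed benefit the passage motivates.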


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06F9/30; G06F9/38
Inventor: 陈弟虎, 粟涛, 叶靖文, 陈俊锐
Owner SUN YAT SEN UNIV