
Apparatus and method for efficiently updating branch target address cache

A branch target address and branch instruction technology, applied to machine execution devices, memory address/allocation/relocation, concurrent instruction execution, etc. It solves problems such as deadlocks and microprocessor errors, and achieves the effects of increased efficiency and reduced area.

Active Publication Date: 2004-11-03
IP FIRST

AI Technical Summary

Problems solved by technology

[0013] Furthermore, certain combinations of conditions related to BTAC prediction can cause deadlocks within the microprocessor. In some cases, the combination of the BTAC's branch predictions, branch instructions that cross instruction cache line boundaries, and speculative instruction fetches on the processor bus can create error conditions that lead to deadlock.

Examples

Embodiment Construction

[0079] Referring now to Figure 1, a block diagram of a microprocessor 100 according to the present invention is shown. The microprocessor 100 comprises a pipelined microprocessor.

[0080] The microprocessor 100 includes an instruction fetcher 102. The instruction fetcher 102 fetches instructions 138 from a memory (e.g., system memory) coupled to the microprocessor 100. In one embodiment, the instruction fetcher 102 fetches instructions from memory at cache-line granularity. In one embodiment, the instructions are variable-length instructions; that is, not all instructions in the instruction set of the microprocessor 100 have the same length. In one embodiment, the microprocessor 100 comprises a microprocessor whose instruction set is substantially compatible with the x86 architecture instruction set, which has variable-length instructions.
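Because fetches occur at cache-line granularity while x86 instructions vary in length, an instruction (including a branch) can straddle two cache lines, the situation the "Problems solved" section ties to potential deadlock. The following C sketch illustrates that point with made-up instruction lengths and an assumed 32-byte line size; none of these values come from the patent.

```c
#include <stdint.h>
#include <stdio.h>

#define CACHE_LINE_BYTES 32  /* assumed line size; the text does not specify one */

/* Hypothetical byte lengths for a short run of variable-length instructions;
 * on x86 the length differs from instruction to instruction. */
static const unsigned insn_len[] = { 3, 5, 2, 7, 1, 6, 4, 5, 2, 3 };

int main(void)
{
    /* The fetcher obtains memory at cache-line granularity, so an instruction
     * whose bytes run past a 32-byte boundary straddles two fetched lines. */
    uint32_t addr = 0;
    for (size_t i = 0; i < sizeof(insn_len) / sizeof(insn_len[0]); i++) {
        uint32_t end = addr + insn_len[i];
        if (addr / CACHE_LINE_BYTES != (end - 1) / CACHE_LINE_BYTES)
            printf("instruction %zu at byte %u crosses a cache-line boundary\n",
                   i, (unsigned)addr);
        addr = end;
    }
    return 0;
}
```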

[0081] The microprocessor 100 also includes an instruction cache 104 coupled to the instruction fetcher 102. The instruction cache 104 receives the ca...

Abstract

A microprocessor with a write queue for a branch target address cache (BTAC) is disclosed. The BTAC is read in parallel with an instruction cache in order to predict a target address of a branch instruction in the accessed cache line. In one embodiment, the BTAC is single-ported; hence, the single port must be shared for reading and writing. When the BTAC needs updating, such as when a branch target address is resolved, the microprocessor stores the branch target address and related information in the write queue. Thus, the write queue potentially enables updating of the BTAC to be delayed until the BTAC is not being read, such as when the instruction cache is idle, a misprediction by the BTAC is being corrected, or a prediction by the BTAC is being overridden. If the write queue becomes full, then it updates the BTAC anyway.
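The update policy described in the abstract can be sketched as a small behavioral model in C. This is only an illustration of the queue-then-drain idea under stated assumptions: the structure names, the queue depth of 4, and the drain-one-entry-when-full behavior are choices made here, not details taken from the patent.

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define WQ_DEPTH 4  /* assumed queue depth; the abstract does not specify one */

/* One pending BTAC update: the resolved branch address, its target address,
 * and related prediction information. */
typedef struct {
    uint32_t branch_addr;
    uint32_t target_addr;
    bool     taken;
} btac_wq_entry;

typedef struct {
    btac_wq_entry entries[WQ_DEPTH];
    int           count;
} btac_write_queue;

/* Placeholder for a write through the BTAC's single port.  In hardware this
 * write would conflict with a read issued in the same cycle. */
static void btac_write_port(const btac_wq_entry *e)
{
    (void)e;
}

/* Drain the oldest queued update through the write port. */
static void btac_drain_one(btac_write_queue *q)
{
    if (q->count == 0)
        return;
    btac_write_port(&q->entries[0]);
    memmove(&q->entries[0], &q->entries[1],
            (size_t)(q->count - 1) * sizeof(q->entries[0]));
    q->count--;
}

/* Called when a branch resolves: defer the BTAC write by queueing it.  If the
 * queue is already full, update the BTAC anyway (one reading of the abstract),
 * even though that may contend with a read of the single port. */
static void btac_enqueue_update(btac_write_queue *q, btac_wq_entry e)
{
    if (q->count == WQ_DEPTH)
        btac_drain_one(q);
    q->entries[q->count++] = e;
}

/* Called on cycles when the read port is not needed -- the instruction cache
 * is idle, a misprediction is being corrected, or a BTAC prediction is being
 * overridden -- so a queued write can use the port without conflict. */
static void btac_on_read_idle(btac_write_queue *q)
{
    btac_drain_one(q);
}

int main(void)
{
    btac_write_queue q = { .count = 0 };
    btac_enqueue_update(&q, (btac_wq_entry){ 0x1000, 0x1234, true });
    btac_on_read_idle(&q);  /* read port free: the deferred update is written */
    return 0;
}
```

A real design might instead stall the resolving branch or drop the oldest entry when the queue fills; the point of the sketch is only that writes are deferred until the read port is free and forced through when the queue becomes full.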

Description

Technical field [0001] The present invention relates to branch prediction in microprocessors, and more particularly to branch prediction using a predictive branch target address cache. Background technique [0002] Modern microprocessors are pipelined microprocessors. That is, several instructions may operate simultaneously in different blocks or pipeline stages of the microprocessor. John L. Hennessy and David A. Patterson, in Computer Architecture: A Quantitative Approach (2nd ed., Morgan Kaufmann Publishers, San Francisco, CA, 1996), define a pipeline as "an implementation technique whereby multiple instructions are overlapped in execution." They give an excellent illustration of a pipeline: a pipeline is similar to an assembly line. In a vehicle assembly line there are many steps, each of which contributes something to the assembly of the vehicle. Although for different...
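To make "overlapped in execution" concrete, here is a minimal C sketch that prints which of four instructions occupies each stage of an assumed classic five-stage pipeline in every cycle; the stage names and counts are illustrative and are not taken from the patent.

```c
#include <stdio.h>

/* Assumed classic five-stage pipeline; the text describes pipelining only in
 * general terms. */
static const char *stage_name[] = { "IF", "ID", "EX", "MEM", "WB" };

enum { NUM_STAGES = 5, NUM_INSNS = 4 };

int main(void)
{
    /* Instruction i occupies stage (cycle - i), so in most cycles several
     * instructions are in flight at once -- the overlap that defines a
     * pipeline, just as several vehicles occupy an assembly line. */
    for (int cycle = 0; cycle < NUM_INSNS + NUM_STAGES - 1; cycle++) {
        printf("cycle %d:", cycle);
        for (int i = 0; i < NUM_INSNS; i++) {
            int stage = cycle - i;
            if (stage >= 0 && stage < NUM_STAGES)
                printf("  I%d:%s", i, stage_name[stage]);
        }
        printf("\n");
    }
    return 0;
}
```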


Application Information

IPC(8): G06F9/38
Inventor: Thomas C. McDonald
Owner: IP FIRST