
Systems and methods for performing branch prediction in a variable length instruction set microprocessor

A microprocessor and instruction set technology, applied in the fields of coding, memory addressing/allocation/relocation, and instruments. It addresses the problems of incorrect branch predictions reducing or eliminating the efficiency gains of saved clock cycles and of pipeline flushes outweighing those gains, with the effects of reducing power consumption, improving performance, and reducing silicon footprint.

Publication Date: 2005-12-15 (Inactive)
ARC INT LTD

AI Technical Summary

Benefits of technology

"The patent text describes a microprocessor architecture that reduces power consumption, improves performance, and reduces silicon footprint as compared to existing microprocessors. The architecture uses dynamic branch prediction to improve performance, but sometimes this can lead to incorrect predictions. To address this, the architecture includes a mechanism that discards incorrect predictions before they are injected into the pipeline. The architecture also utilizes zero overhead loops and dynamic branch prediction to improve performance. Overall, the architecture provides better performance and efficiency with reduced latency and improved branch prediction."

Problems solved by technology

Ordinarily, the risk that an incorrectly predicted branch will force a pipeline flush is outweighed by the benefit of the clock cycles saved. While branch prediction is therefore effective at increasing effective processing speed, problems can arise when dealing with a variable length instruction set that reduce or eliminate these efficiency gains.


Examples

Embodiment Construction

[0025] The following description is intended to convey a thorough understanding of the invention by providing specific embodiments and details involving various aspects of a new and useful microprocessor architecture. It is understood, however, that the invention is not limited to these specific embodiments and details, which are exemplary only. It further is understood that one possessing ordinary skill in the art, in light of known systems and methods, would appreciate the use of the invention for its intended purposes and benefits in any number of alternative embodiments, depending upon specific design and other needs.

[0026]FIG. 1 is a diagram illustrating the contents of a 32-bit instruction memory and a corresponding table illustrating the location of particular instructions within the instruction memory in connection with a technique for selectively ignoring branch prediction information in accordance with at least one exemplary embodiment of this invention. When branch predi...
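The selective-ignore rule that FIG. 1 illustrates (and that the Abstract below states in full) can be sketched roughly in C. This is a minimal sketch under assumed names (prediction_t, filter_prediction, addr_is_word_aligned), not the patent's implementation; the 32-bit fetch-word alignment check is also an assumption taken from the figure's 32-bit instruction memory.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative sketch of the discard rule: a branch prediction obtained
 * from a fetch-address look-up is ignored when the fetch was a
 * non-sequential fetch to an unaligned instruction address and the
 * instruction's BTAC bit is zero. All names are assumptions. */
typedef struct {
    bool     valid;   /* a prediction was found for this fetch address */
    uint32_t target;  /* predicted branch target address */
} prediction_t;

static bool addr_is_word_aligned(uint32_t addr)
{
    return (addr & 0x3u) == 0;   /* aligned to a 32-bit fetch word */
}

/* Returns the prediction to act on; marks it invalid if it must be discarded. */
prediction_t filter_prediction(prediction_t pred,
                               uint32_t fetch_addr,
                               bool sequential_fetch,
                               unsigned btac_bit)
{
    if (pred.valid &&
        !sequential_fetch &&
        !addr_is_word_aligned(fetch_addr) &&
        btac_bit == 0) {
        pred.valid = false;      /* discard before it reaches the pipeline */
    }
    return pred;
}
```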


Abstract

A method of performing branch prediction in a microprocessor using variable length instructions is provided. An instruction is fetched from memory based on a specified fetch address and a branch prediction is made based on that address. The prediction is selectively discarded if the look-up was based on a non-sequential fetch to an unaligned instruction address and the branch target alignment cache (BTAC) bit of the instruction is equal to zero. To remove the inherent latency of branch prediction, an instruction prior to a branch instruction may be fetched concurrently with a branch prediction unit look-up table entry containing prediction information for the next instruction word. The branch instruction is then fetched and a prediction is made on it based on the information fetched in the previous cycle; the predicted target instruction is fetched on the next clock cycle. If zero-overhead loops are used, the look-up table of the branch prediction unit is updated whenever the zero-overhead loop mechanism is updated. The last fetch address of the last instruction of the loop body is stored in the branch prediction look-up table, and whenever an instruction fetch hits the end of the loop body, the fetch is predictively redirected to the start of the loop body. The last fetch address of the loop body is derived from the address of the first instruction after the end of the loop.
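As a rough illustration of the zero-overhead-loop interaction described above, the sketch below shows a branch prediction look-up table being updated whenever the loop registers change, keyed on the last fetch address of the loop body (derived here from the address of the first instruction after the loop). The table size, direct-mapped indexing, 32-bit fetch words, and all names are assumptions for illustration, not the patent's actual design.

```c
#include <stdint.h>

#define BPU_ENTRIES 64   /* assumed table size, for illustration only */

/* One prediction entry: a fetch address and the address the fetch is
 * predictively redirected to when that fetch address is hit. */
typedef struct {
    uint32_t fetch_addr;   /* last fetch address of the loop body */
    uint32_t target_addr;  /* start of the loop body */
    int      valid;
} bpu_entry_t;

static bpu_entry_t bpu_table[BPU_ENTRIES];

static unsigned bpu_index(uint32_t fetch_addr)
{
    return (fetch_addr >> 2) % BPU_ENTRIES;   /* simple direct-mapped index */
}

/* Called whenever the zero-overhead loop registers are updated, so the BPU
 * can predictively redirect fetches that hit the end of the loop body. */
void bpu_update_for_loop(uint32_t loop_start, uint32_t loop_end)
{
    /* loop_end is taken to be the address of the first instruction after
     * the loop; assuming 32-bit fetch words, the last fetch address of the
     * loop body is one word earlier. */
    uint32_t last_fetch_addr = loop_end - 4;

    bpu_entry_t *e = &bpu_table[bpu_index(last_fetch_addr)];
    e->fetch_addr  = last_fetch_addr;
    e->target_addr = loop_start;
    e->valid       = 1;
}

/* On each fetch, redirect to the loop start when the fetch hits the loop end. */
uint32_t bpu_next_fetch(uint32_t fetch_addr)
{
    const bpu_entry_t *e = &bpu_table[bpu_index(fetch_addr)];
    if (e->valid && e->fetch_addr == fetch_addr)
        return e->target_addr;          /* predicted redirect to loop start */
    return fetch_addr + 4;              /* otherwise fetch sequentially */
}
```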

Description

CROSS REFERENCE TO RELATED APPLICATION(S) [0001] This application claims priority to provisional application No. 60/572,238, filed May 19, 2004, entitled “Microprocessor Architecture,” hereby incorporated by reference in its entirety. FIELD OF THE INVENTION [0002] This invention relates generally to microprocessor architecture and more specifically to an improved architecture and mode of operation of a microprocessor for performing branch prediction. BACKGROUND OF THE INVENTION [0003] A typical component of a multistage microprocessor pipeline is the branch prediction unit (BPU). Usually located in or near a fetch stage of the pipeline, the branch prediction unit increases effective processing speed by predicting whether a branch to a non-sequential instruction will be taken based upon past instruction processing history. The branch prediction unit contains a branch look-up or prediction table that stores the address of branch instructions, an indication as to whether the branch was t...
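To make the background concrete, here is a hedged sketch of the kind of branch look-up table the passage describes: entries keyed by branch instruction address, holding a predicted target and a taken/not-taken indication. The 2-bit saturating counter is a common choice used here as an assumption; the passage itself only says the table stores the branch address and a taken indication.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative branch look-up (prediction) table entry. Field names and
 * the 2-bit counter scheme are assumptions, not the patent's layout. */
typedef struct {
    uint32_t branch_addr;   /* address of the branch instruction */
    uint32_t target_addr;   /* predicted target if taken */
    uint8_t  counter;       /* 0..3: predict taken when counter >= 2 */
    bool     valid;
} branch_entry_t;

bool predict_taken(const branch_entry_t *e)
{
    return e->valid && e->counter >= 2;
}

/* Update the entry with the actual outcome once the branch resolves. */
void train(branch_entry_t *e, bool taken)
{
    if (taken && e->counter < 3)
        e->counter++;
    else if (!taken && e->counter > 0)
        e->counter--;
}
```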


Application Information

Patent Type & Authority Applications(United States)
IPC IPC(8): G06F9/00G06F9/30G06F9/318G06F9/38G06F12/00G06F12/08G06F15/00G06F15/76G06F15/78H03M13/00
CPCG06F5/01G06F9/3861G06F9/30036G06F9/30149G06F9/30181G06F9/325G06F9/3802G06F9/3806G06F9/3816G06F9/3844G06F9/3846G06F9/3885G06F9/3897G06F11/3648G06F12/0802G06F15/7867Y02B60/1225G06F9/32Y02B60/1207G06F9/30032G06F9/30145Y02D10/00
Inventor WONG, KAR-LIKHAKEWILL, JAMESTOPHAM, NIGELFUHLER, RICH
Owner ARC INT LTD