Information processing device

A technology relating to an information processing device and an instruction buffer, applied in the fields of program control, computation using denominational number representation, instruments, etc. It addresses the problems of long processing times, reduced efficiency, and growth in the hardware (instruction buffer) of the information processing device, so as to reduce confusion in pipeline processing and restrict the increase in instruction buffer hardware.

Status: Inactive
Publication Date: 2006-10-05
TAGO SHIN ICHIRO +7

AI Technical Summary

Benefits of technology

[0017] Therefore, an object of the present invention is to restrict the increase in hardware such as instruction buffers and to reduce the confusion in pipeline processing caused by consecutive branching instructions, in an information processing device which reads an instruction before that instruction is executed by pipeline processing.
[0018] A further object of the present invention is to provide a memory bus access system for an information processing device which reduces the number of superfluous memory bus accesses and makes more efficient instruction fetches possible.
[0022] In addition, when a branching instruction is in the first instruction sequence, it is sufficient to employ at least a first instruction buffer, which stores the first instruction sequence being processed, and a second instruction buffer, which stores the second instruction sequence of the branch target, so the hardware for the instruction buffering portions which store the branch target instruction sequence can be kept small.
[0023] In addition, the branch target address information of the next branching instruction in the first instruction sequence being processed and the branch target address information of the next branching instruction in the second instruction sequence are stored in the first and second branch target address information buffers, respectively. For this reason, when a branching instruction is processed, irrespective of whether the system is in the branching state or in the non-branching state, the branch target instruction sequence can be read immediately using this stored branch target address information, and the confusion in pipeline processing due to consecutive branching instructions can be reduced.
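The following Python sketch illustrates, in purely behavioral terms, how a pair of instruction buffers and a pair of branch target address information buffers can keep consecutive branches cheap: each buffer latches the target address of the next branch seen in its own sequence, so the surviving side can request the new target sequence as soon as a branch resolves. The class names, the instruction dictionaries, and the fetch callback are illustrative assumptions, not the patented circuit.

```python
class BufferPair:
    """One instruction buffer plus its branch target address information buffer."""
    def __init__(self):
        self.instructions = []          # fetched instruction sequence
        self.branch_target_addr = None  # target address info of the next branch, if any


class DualBufferFrontEnd:
    def __init__(self, fetch):
        self.fetch = fetch              # fetch(addr) -> list of instruction dicts (assumed helper)
        self.current = BufferPair()     # first buffer: the sequence being processed
        self.target = BufferPair()      # second buffer: the branch target sequence

    def fill(self, buf, addr):
        buf.instructions = self.fetch(addr)
        buf.branch_target_addr = None
        # Scan the fetched sequence for its next branching instruction and latch
        # the branch target address information, so no re-decoding is needed later.
        for insn in buf.instructions:
            if insn.get("is_branch"):
                buf.branch_target_addr = insn["target"]
                break

    def resolve_branch(self, taken):
        # Whichever way the branch resolves, the surviving sequence's buffer
        # already holds the target address of its own next branch, so the new
        # target sequence can be requested immediately into the freed buffer.
        if taken:
            self.current, self.target = self.target, self.current
        if self.current.branch_target_addr is not None:
            self.fill(self.target, self.current.branch_target_addr)
```

In hardware the two buffers would be fixed registers filled by the fetch logic; the classes above merely mirror that data flow.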

Problems solved by technology

However, when there is a branching instruction in the instruction sequence, the branch target instruction, which may be executed directly after that branching instruction, does not follow it in address order, so the pipeline processing becomes confused and the efficiency of the information processing device may be reduced.
This results in a disadvantageous increase in the hardware (instruction buffer) of the information processing device.
In addition, in the information processing device according to the prior art, a branching instruction had to be decoded to generate its branch target address before the branch target instruction sequence could be read. A large amount of processing time was therefore required between reading the branching instruction and reading the corresponding branch target instruction, so an instruction buffer for a plurality of instruction sequences could not be employed effectively.
However, when a main memory bus access which makes an instruction fetch from the main memory is performed frequently, the traffic on the memory bus increases.
An increase in traffic on the memory buses causes delays in accessing the memory bus.
In particular, it is undesirable that, at a stage before the branching instruction is executed, target side or sequential side instructions which will probably never be executed are fetched from the main memory, because this lengthens the time needed to fetch from the main memory the instructions which actually become necessary once the branching instruction has been executed.



Examples


second embodiment

[0122] FIG. 14 is a timing diagram for the information processing device according to the second embodiment of the present invention. The information processing device shown in FIG. 14 is a microprocessor comprising a chip-mounted CPU 40, a cache memory unit 50 and a memory bus access portion 60. The area to the left of the memory bus access portion 60 lies outside the chip, where the main memory 64 is connected via the external memory bus 62.

[0123] The CPU 40 comprises an instruction decoder and execution portion 49 which decodes and executes instructions. The CPU 40 shown in FIG. 14 also comprises dual-instruction-fetch-type instruction fetch portions 410, 411, which fetch both the sequential side and the target side instructions of a branching instruction at the same time, and instruction buffers 470, 471, which store the instructions fetched on the sequential side and on the target side. Instructions selected by the selector 48 from amo...
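As a rough illustration of this dual-fetch arrangement, the sketch below fetches the sequential-side and target-side sequences at the same time and then lets a selector hand the surviving sequence to the execution stage. The mapping onto the numbered blocks 410, 411, 470, 471, 48 and 49 is indicative only, the read_memory and execute callbacks are assumed helpers, and cache and bus timing are ignored.

```python
class DualFetchCPU:
    def __init__(self, read_memory):
        self.read_memory = read_memory  # read_memory(addr) -> list of instructions (assumed helper)
        self.buf_sequential = []        # corresponds roughly to instruction buffer 470
        self.buf_target = []            # corresponds roughly to instruction buffer 471

    def fetch_both_sides(self, sequential_addr, target_addr):
        # Fetch portions 410 and 411: the sequential-side and branch-target-side
        # instruction sequences are requested at the same time.
        self.buf_sequential = self.read_memory(sequential_addr)
        self.buf_target = self.read_memory(target_addr)

    def select_and_execute(self, branch_taken, execute):
        # Selector 48: once the branch outcome is known, one of the two buffers
        # is chosen and fed to the decoder/execution portion 49 without waiting
        # for a fresh fetch of the taken path.
        chosen = self.buf_target if branch_taken else self.buf_sequential
        for insn in chosen:
            execute(insn)
```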



Abstract

The present invention is defined in that an information processing device which reads, buffers, decodes and executes instructions from an instruction store portion by pipeline processing comprises: an instruction reading request portion which assigns a read address to the instruction store portion; an instruction buffering portion including a plurality of instruction buffers which buffer an instruction sequence read from the instruction store portion; an instruction execution unit which decodes and executes instructions buffered by the instruction buffering portion; a branching instruction detection portion which detects a branching instruction in the instruction sequence read from the instruction store portion; and a branch target address information buffering portion including a plurality of branch target address information buffers which, when the branching instruction detection portion has detected a branching instruction, buffer the branch target address information for generating the branch target address of the branching instruction; wherein, when the branching instruction detection portion has detected a branching instruction, either the branch target address information of the branching instruction is stored in one of the plurality of branch target address information buffers, or the branch target instruction sequence of the branching instruction is stored in one of the plurality of instruction buffers in addition to the storing in the branch target address information buffer.
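Read as behavior rather than circuitry, the abstract's "wherein" clause amounts to the following policy, sketched in Python under the assumption of hypothetical detect_branch and fetch helpers: on detecting a branching instruction in a freshly read sequence, always store its branch target address information in a free address information buffer, and optionally read the branch target instruction sequence into a free instruction buffer as well.

```python
def on_sequence_read(sequence, detect_branch, fetch,
                     addr_info_buffers, instruction_buffers,
                     also_prefetch_target=False):
    """Buffer branch target address info (and optionally the target sequence)
    when a branching instruction is detected in a freshly read sequence."""
    branch = detect_branch(sequence)          # branching instruction detection portion
    if branch is None:
        return
    try:
        slot = addr_info_buffers.index(None)  # find a free branch target address buffer
    except ValueError:
        return                                # no free buffer; replacement policy omitted here
    addr_info_buffers[slot] = branch["target_info"]
    if also_prefetch_target:
        try:
            free = instruction_buffers.index(None)
        except ValueError:
            return
        # Alternative named in the abstract: also read the branch target
        # instruction sequence into one of the plural instruction buffers.
        instruction_buffers[free] = fetch(branch["target_info"])
```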

Description

BACKGROUND OF THE INVENTION [0001] 1. Field of the Invention [0002] The present invention relates to an information processing device which reads, buffers and executes instructions by pipeline processing, and more particularly, to an information processing device which can reduce pipeline confusion due to branching when executing instruction sequences comprising branching instructions. [0003] The present invention further relates to a memory bus access system for an information processing device which performs instruction fetching, instruction buffering, and instruction decoding and execution by pipeline processing, and more particularly provides an efficient memory bus access system in a dual-instruction-fetch-type information processing system which performs parallel fetches of the branching-generating side instruction sequence (referred to below as the target side instruction sequence) and the non-branching-generating side sequence (referred to below as the sequential side instruction sequ...


Application Information

Patent Type & Authority: Application (United States)
IPC(8): G06F 9/00; G06F 9/38
CPC: G06F 9/3804; G06F 9/3846; G06F 9/3806; G06F 9/38
Inventors: TAGO, SHIN-ICHIRO; SATO, TAIZO; TAKEBE, YOSHIMASA; YAMAZAKI, YASUHIRO; KAMIGATA, TERUHIKO; SUGA, ATSUHIRO; OKANO, HIROSHI; YODA, HITOSHI
Owner: TAGO SHIN ICHIRO