
Method and device for accelerating input/output processing via cache injection

A cache memory and input-processing technology, applied in memory systems, electrical digital data processing, instruments, etc., that addresses problems such as data accesses costing tens or hundreds of processor cycles after cache lines have been invalidated.

Inactive Publication Date: 2004-05-26
INT BUSINESS MASCH CORP
  • Summary
  • Abstract
  • Description
  • Claims
  • Application Information

AI Technical Summary

Problems solved by technology

However, the processor may still need those invalidated cache lines in order to perform subsequent I/O processing or other user application functions. Thus, when the processor needs to access data in an invalidated cache line, it must fetch the data from system memory, which costs tens or hundreds of processor cycles each time a cache line is accessed.
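The cost described above can be illustrated with a toy cache model. The cycle counts and the invalidation rule are illustrative assumptions, not figures from the patent: a DMA write invalidates matching cache lines, so subsequent processor reads of those lines miss and pay the system-memory latency.

```python
# Toy model (hypothetical latencies) of the problem described above: a DMA
# write invalidates cache lines, so later processor reads of those lines
# miss and pay a memory latency of tens to hundreds of cycles.

HIT_CYCLES = 1      # assumed cache-hit latency
MISS_CYCLES = 100   # assumed system-memory latency per missed line

def read_cost(cache: set, line: int) -> int:
    """Return the cycle cost of reading one cache line."""
    if line in cache:
        return HIT_CYCLES
    cache.add(line)          # the line is fetched from memory and cached
    return MISS_CYCLES

cache = {0, 1, 2, 3}         # lines currently valid in the cache
dma_written = {1, 2}         # lines a DMA transfer just updated in memory
cache -= dma_written         # conventional snooping: invalidate those lines

total = sum(read_cost(cache, line) for line in range(4))
print(total)  # 2 hits + 2 misses = 1 + 100 + 100 + 1 = 202 cycles
```

With cache injection (described in the abstract below), the two invalidated lines would instead be updated or allocated in the cache during the DMA transfer, avoiding both misses.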


Image

  • Method and device for accelerating input/output processing via cache injection


Embodiment Construction

[0011] Referring now to the drawings, and specifically to Figure 1, there is shown a block diagram of a data processing system employing conventional direct memory access (DMA) transfers. As shown, a data processing system 10 includes a central processing unit (CPU) 11 connected to a source memory 12, a target memory 14, and a DMA controller 15 via a system bus 13. The DMA controller 15 is connected to the source memory 12 and the target memory 14 through signal lines 16 and 17, respectively. The DMA controller 15 is also connected to the CPU 11 through an acknowledge line 18 and a request line 19. During a DMA operation, data may be transferred directly from source memory 12 to target memory 14 via system bus 13 without passing through CPU 11.

[0012] A DMA transfer includes the following three main steps. First, CPU 11 sets up the DMA transfer by providing DMA controller 15 with the identification of source memory 12 and target memory 14, the address of the ...
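The three-step flow that the paragraph begins to list (setup, transfer without CPU involvement, completion acknowledge) can be sketched as follows. The class and method names are hypothetical, and the acknowledge line 18 is modeled as a simple flag:

```python
# Hypothetical sketch of a DMA transfer's three steps:
# (1) the CPU programs the controller with source, target, and length;
# (2) the controller moves data from source to target memory over the
#     system bus without passing through the CPU;
# (3) the controller signals completion back to the CPU (acknowledge).

class DMAController:
    def __init__(self):
        self.done = False

    def setup(self, source, target, length):    # step 1: CPU sets up transfer
        self.source, self.target, self.length = source, target, length

    def run(self):                               # step 2: bus transfer, no CPU
        self.target[: self.length] = self.source[: self.length]
        self.done = True                         # step 3: raise acknowledge

source_memory = [10, 20, 30, 40]
target_memory = [0, 0, 0, 0]
dma = DMAController()
dma.setup(source_memory, target_memory, length=4)
dma.run()
print(target_memory, dma.done)  # [10, 20, 30, 40] True
```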



Abstract

A method for accelerating input/output operations within a data processing system is disclosed. Initially, a determination is made in a cache controller as to whether or not a bus operation is a data transfer from a first memory to a second memory without intervening communications through a processor, such as a direct memory access (DMA) transfer. If the bus operation is such a data transfer, a determination is made in a cache memory as to whether or not the cache memory includes a copy of data from the data transfer. If the cache memory does not include a copy of data from the data transfer, a cache line is allocated within the cache memory to store a copy of data from the data transfer.
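The decision flow of the abstract can be sketched as follows. The function and field names are illustrative, not from the patent, and a real cache controller performs this snooping in hardware:

```python
# Minimal sketch (illustrative names) of the cache-injection decision:
# on a snooped bus operation, if it is a memory-to-memory (DMA-style)
# transfer, update an existing copy in the cache, or allocate a cache
# line for the data instead of merely invalidating it.

def snoop(cache: dict, op: dict) -> str:
    if op["kind"] != "dma":          # not a direct memory-to-memory transfer
        return "ignore"
    addr = op["addr"]
    if addr in cache:                # cache already holds a copy: update it
        cache[addr] = op["data"]
        return "update"
    cache[addr] = op["data"]         # inject: allocate a line for the data
    return "allocate"

cache = {0x100: b"old"}
print(snoop(cache, {"kind": "dma", "addr": 0x200, "data": b"new"}))  # allocate
print(snoop(cache, {"kind": "dma", "addr": 0x100, "data": b"new"}))  # update
```

Either way, the processor's next access to the transferred data hits in the cache rather than paying a system-memory fetch.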

Description

Technical Field

[0001] The present invention relates generally to input/output operations, and more particularly, to methods and apparatus for accelerating input/output operations. More specifically, the present invention relates to methods and apparatus for accelerating input/output processing via cache injection.

Background Art

[0002] Generally, a processor controls and coordinates the execution of instructions in a data processing system. To aid in the execution of instructions, the processor must frequently move data into the processor from system memory or peripheral input/output (I/O) devices for processing and, after processing, move data out of the processor to system memory or peripheral I/O devices. As such, the processor must often coordinate the movement of data from one storage device to another. In contrast, a direct memory access (DMA) transfer is a communication that transfers data from one memory device to another over a sy...

Claims


Application Information

Patent Timeline
No application timeline available
Patent Type & Authority: Application (China)
IPC(8): G06F12/08
CPC: G06F12/0835
Inventors: Patrick J. Bohrer, Ramakrishnan Rajamony, Hazim Shafi
Owner INT BUSINESS MASCH CORP