
Method, device, and system for controlling cache

A cache control method and system, applied in the field of computing, addressing problems such as wasted bandwidth resources, increased waiting time, and reduced CPU processing efficiency. The effect is to avoid unnecessary cache synchronization work and to eliminate CPU waiting delays.

Active Publication Date: 2014-11-05
四川华鲲振宇智能科技有限责任公司

AI Technical Summary

Problems solved by technology

[0004] Therefore, the prior-art cache performs a great deal of unnecessary data synchronization. During program execution, the ALU and other CPU components must wait for the cache to complete this work, which increases CPU waiting time, reduces CPU processing efficiency, and wastes bandwidth resources.



Examples


Embodiment 1

[0024] The embodiment of the present invention first provides a cache memory control system. To make the description of the system's structure and principle clearer, this embodiment describes the system as applied to a central processing unit (CPU).

[0025] Figure 1 is a schematic diagram of the application structure of an embodiment of the cache memory control system of the present invention. As shown in Figure 1, the system of this embodiment may include: an address detection module 11 and a cache control module 12;

[0026] Wherein, the address detection module 11 is used to obtain the changed address range of the target object;

[0027] For example, the target object may include stack memory, heap memory, or memory storing data such as code segments and data segments. As for the changed address range: for stack memory, for instance, it corresponds to the address range of stack growth or stack roll...
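The address-detection idea above can be sketched in C. This is a minimal illustration, not the patent's implementation: given the previous and current stack pointer values, it classifies the change and computes the changed address range. It assumes a downward-growing stack (common on most architectures); all type and function names are illustrative.

```c
#include <stdint.h>

/* Classification of a stack-pointer change (illustrative names). */
typedef enum { STACK_GROWTH, STACK_ROLLBACK, STACK_UNCHANGED } change_type_t;

typedef struct {
    uintptr_t low;       /* inclusive lower bound of the changed range */
    uintptr_t high;      /* exclusive upper bound of the changed range */
    change_type_t type;
} addr_range_t;

/* Compare the old and new stack pointer to find the changed range.
 * The stack is assumed to grow toward lower addresses. */
addr_range_t detect_changed_range(uintptr_t old_sp, uintptr_t new_sp)
{
    addr_range_t r;
    if (new_sp < old_sp) {        /* stack grew downward: memory allocated */
        r.low = new_sp; r.high = old_sp; r.type = STACK_GROWTH;
    } else if (new_sp > old_sp) { /* stack rolled back: memory released */
        r.low = old_sp; r.high = new_sp; r.type = STACK_ROLLBACK;
    } else {
        r.low = r.high = old_sp; r.type = STACK_UNCHANGED;
    }
    return r;
}
```

A rollback range is the interesting case for this patent: data in it will never be read again, so the cache need not write it back.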

Embodiment 2

[0053] Figure 2 is a schematic workflow diagram of another embodiment of the cache memory control system of the present invention. This embodiment combines the system structure shown in Figure 1 with the workflow shown in Figure 2, and explains the working principle of the cache memory control system using the cache control of stack growth and stack rollback as an example. This embodiment assumes there is no memory management unit (Memory Management Unit, MMU) in the CPU; that is, the format of the cache address is consistent with that of the detected memory address, and no conversion from virtual address (program memory address) to physical address (cache address) is required.

[0054] 201. The address detection module detects the value of the stack pointer, and obtains the initial memory address and the latest memory address of the stack memory;

[0055] Wherein, the address of the stack memory is identified...
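Once the changed range is known, a cache operation address range must be derived from it. The sketch below illustrates one plausible way to do this, under assumptions not stated in the patent text: only cache lines lying entirely inside a released range are safe to drop without synchronization, since a line straddling the boundary may still hold live data. The 64-byte line size and all names are illustrative.

```c
#include <stdint.h>
#include <stddef.h>

#define CACHE_LINE 64u   /* illustrative line size */

/* Round up/down to a cache-line boundary. */
static uintptr_t align_up(uintptr_t a)
{
    return (a + CACHE_LINE - 1) & ~(uintptr_t)(CACHE_LINE - 1);
}
static uintptr_t align_down(uintptr_t a)
{
    return a & ~(uintptr_t)(CACHE_LINE - 1);
}

/* Number of whole cache lines fully contained in [low, high).
 * These are the lines for which synchronization can be skipped. */
size_t whole_lines(uintptr_t low, uintptr_t high)
{
    uintptr_t first = align_up(low);
    uintptr_t last  = align_down(high);
    return (first < last) ? (size_t)((last - first) / CACHE_LINE) : 0;
}
```

Shrinking (rather than expanding) the range to whole lines is the conservative choice: it can only skip synchronization for data that is provably dead.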

Embodiment 3

[0085] Figure 3 is a schematic workflow diagram of another embodiment of the cache memory control system of the present invention. This embodiment differs from the embodiment shown in Figure 2 in that the CPU to which the system is applied has an MMU, so a conversion from virtual address (memory address) to physical address (cache address) is required. As an example, the task mentioned above refers to an application program, and switching refers to switching the program to run on another stack.

[0086] For example, if the value of the cr3 register of an Intel x86 CPU changes, the task has been switched.
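The cr3-based check above can be sketched as follows. This is an illustrative simulation only: actually reading cr3 requires privileged (ring-0) code, so here the current value is passed in by the caller, and the first observed value is counted as a switch. All names are assumptions, not from the patent.

```c
#include <stdbool.h>
#include <stdint.h>

/* Last page-table base value seen; 0 means "no task observed yet". */
static uint64_t last_cr3 = 0;

/* A task switch is inferred whenever the page-table base register
 * (cr3 on Intel x86) differs from the previously observed value. */
bool task_switched(uint64_t current_cr3)
{
    bool switched = (current_cr3 != last_cr3);
    last_cr3 = current_cr3;   /* remember for the next check */
    return switched;
}
```

On a task switch the detection logic would re-establish the stack address range for the newly scheduled task before resuming range tracking.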

[0087] In addition, in a specific implementation, the translation lookaside buffer (Translation Lookaside Buffer, TLB for short) in the MMU can also be monitored. By sending the virtual address of the stack pointer to the TLB, the TLB can perform the physical-address conversion. If the ASID in the hit TLB entry is ...



Abstract

Provided are a cache control method, device, and system. The method includes: acquiring a varying address range of a target object; determining an address variation type according to the varying address range; and, if the type is memory address allocation or release, determining a cache operation address range corresponding to the varying address range and controlling the cache to execute cache operation information within that range, where the cache operation information prohibits the cache from performing data synchronization within the cache operation address range. The invention avoids unnecessary cache synchronization work, significantly improving CPU processing efficiency and reducing the demands on bus and memory bandwidth.
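The abstract's end-to-end flow can be illustrated with a small simulation, under simplifying assumptions: the cache is modeled as an array of per-line dirty flags, and "prohibiting data synchronization" is modeled as clearing the dirty flag (so no write-back occurs) for every line wholly inside a released range. The geometry, line size, and names are all illustrative, not the patent's implementation.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define LINE      64u    /* illustrative cache-line size */
#define NUM_LINES 16u    /* illustrative cache size */

static bool dirty[NUM_LINES];   /* dirty[i]: line i needs write-back */

/* Mark the line holding addr dirty, as a store through the cache would. */
void touch(uintptr_t addr)
{
    dirty[(addr / LINE) % NUM_LINES] = true;
}

/* On release (e.g. stack rollback) of [low, high): the released data
 * will never be read again, so clear dirty flags for every whole,
 * aligned line inside the range. Returns how many write-backs were
 * skipped -- the synchronization work the method avoids. */
size_t prohibit_sync(uintptr_t low, uintptr_t high)
{
    size_t skipped = 0;
    uintptr_t first = (low + LINE - 1) / LINE * LINE;  /* align up */
    for (uintptr_t a = first; a + LINE <= high; a += LINE) {
        size_t i = (a / LINE) % NUM_LINES;
        if (dirty[i]) {
            dirty[i] = false;   /* drop the line instead of syncing it */
            skipped++;
        }
    }
    return skipped;
}
```

In a real cache this clearing step would correspond to an invalidate-without-writeback operation on the affected lines.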

Description

Technical Field

[0001] The invention relates to computer technology, and in particular to a cache memory control method, device, and system.

Background

[0002] The central processing unit (Central Processing Unit, CPU) needs to read data from main memory, i.e., the memory, during computation. However, memory access is much slower than the CPU's operation speed, so the CPU's processing capacity cannot be fully utilized, affecting the efficiency of the entire system. To alleviate this speed mismatch between CPU and memory, a high-speed cache memory is usually placed between them. The cache pre-reads data from memory, and the CPU accesses the cache directly.

[0003] Specifically, the memory includes stack memory and heap memory for storing data. The stack memory is a section of memory used to store temporary data during program execution, and the heap memory is a sectio...


Application Information

Patent Type & Authority: Patent (China)
IPC (8): G06F12/08, G06F12/0802, G06F12/0891
CPC: G06F12/0802, G06F12/0891
Inventor 蔡安宁
Owner 四川华鲲振宇智能科技有限责任公司