
Implementation method for a many-core simplified Cache protocol without horizontal consistency

A protocol-implementation and consistency technology in the field of high-performance computing. It solves the false-sharing problem of the shared main-memory Cache structure, reduces hardware overhead, and improves write-back efficiency.

Pending Publication Date: 2022-03-22
JIANGNAN INST OF COMPUTING TECH


Problems solved by technology

[0005] The purpose of the present invention is to provide a method for implementing a many-core simplified Cache protocol without horizontal consistency, so as to overcome the false-sharing problem of the shared main-memory Cache structure in many-core processors.



Examples


Embodiment

[0020] Embodiment: The present invention provides a method for implementing a many-core simplified Cache protocol without horizontal consistency, specifically comprising the following steps:

[0021] S1. Obtain the Cache-line status-bit information from the hardware Cache, analyze the data-update situation within the Cache line, and mark the updated data;

[0022] S2. If none of the data in the Cache line has been updated, or all of the data in the Cache line has been updated, jump to S5; if only part of the data in the Cache line has been updated, jump to S3;

[0023] S3. When only part of the data in a Cache line needs to be written back, determine the unit size and the number of data units of that part, set the bit-mask bits corresponding to that data to 1, and set the other bits to 0;

[0024] S4. According to the mask granularity and the mask settings, update in main memory the data whose corresponding mask bit is 1, as follows:

[0025] S4.1 According to the physical addr...
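The masked write-back of steps S1–S4 can be sketched in C. This is a minimal illustration, not the patent's implementation: the 64-byte line size, the per-byte granularity of the dirty mask, and all type and function names are assumptions introduced here.

```c
#include <stdint.h>
#include <string.h>

#define LINE_SIZE 64  /* hypothetical Cache-line size in bytes */

/* Hypothetical model of a Cache line with a per-byte dirty mask:
 * bit i of dirty_mask == 1 means byte i was updated (S1/S3). */
typedef struct {
    uint8_t  data[LINE_SIZE];  /* cached copy of the line */
    uint64_t dirty_mask;       /* per-byte update marks   */
} cache_line_t;

/* Write the line back without horizontal (core-to-core) coherence
 * traffic: bytes whose mask bit is 0 are left untouched in memory,
 * so other cores writing other bytes of the same line are never
 * clobbered - this is how the masked update avoids false sharing. */
void write_back(const cache_line_t *line, uint8_t *main_mem)
{
    if (line->dirty_mask == 0)              /* S2: nothing updated      */
        return;
    if (line->dirty_mask == ~UINT64_C(0)) { /* S2: fully updated -> S5  */
        memcpy(main_mem, line->data, LINE_SIZE);
        return;
    }
    for (int i = 0; i < LINE_SIZE; i++)     /* S4: partial, masked copy */
        if (line->dirty_mask & (UINT64_C(1) << i))
            main_mem[i] = line->data[i];
}
```

In this sketch the mask granularity is one byte; the patent's S4 refers generically to "mask granularity", so a real design might track dirtiness at word or double-word granularity instead, shrinking the mask at the cost of coarser updates.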



Abstract

The invention discloses a method for realizing a many-core simplified Cache protocol without horizontal consistency, comprising the following steps: S1, analyzing the update condition of the data in a Cache line and marking the updated data; S2, if none of the data in the Cache line has been updated, or all of the data in the Cache line has been updated, jumping to S5, and if only part of the data in the Cache line has been updated, jumping to S3; S3, when only part of the data in one Cache line needs to be written back, setting the bit-mask bits corresponding to that data to 1 and the other bits to 0; S4, updating in main memory the data whose corresponding mask bit is 1, according to the mask granularity and the mask settings; and S5, performing the write-back operation on the Cache line directly. The method effectively solves the false-sharing problem of a shared main-memory Cache structure, improves write-back efficiency, and effectively reduces the processor's hardware overhead for Cache data management.

Description

technical field

[0001] The invention relates to a method for implementing a many-core simplified Cache protocol without horizontal consistency, and belongs to the technical field of high-performance computing.

Background technique

[0002] To alleviate the gap between main-memory access speed and the processor's data-processing speed in a computer system, one or more levels of cache memory (Cache) are added between the processor and main memory. A Cache line is the basic unit of data transfer between the Cache and main memory, and one Cache line contains multiple data units. When a line of data is copied from main memory into the Cache, the storage control unit creates an entry for it; this entry includes both the memory data and the location information of the line in memory.

[0003] Under a shared main-memory architecture in which each processor core contains an independent Cache structure, the computing tasks in each proces...
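The entry described in paragraph [0002] can be illustrated as a C struct. This is a sketch under assumptions: the field names, the 64-byte line size, and the status bits are all hypothetical and not taken from the patent.

```c
#include <stdint.h>

#define LINE_SIZE 64  /* hypothetical Cache-line size in bytes */

/* Hypothetical entry created by the storage control unit when a line
 * is filled from main memory: it pairs the cached copy of the data
 * with the location (tag) of that line in memory, plus the status
 * bits that the protocol reads in step S1. */
typedef struct {
    uint64_t tag;              /* location of the line in main memory */
    uint8_t  data[LINE_SIZE];  /* copy of the memory data             */
    uint8_t  valid;            /* entry holds usable data             */
    uint64_t dirty_mask;       /* per-byte update marks (status bits) */
} cache_entry_t;
```

A real hardware entry would pack these fields into dedicated tag and status arrays rather than a flat struct; the struct only makes the two components named in [0002] (data plus location information) concrete.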

Claims


Application Information

IPC(8): G06F8/41
CPC: G06F8/443; G06F8/441
Inventor(s): 何王全, 郑方, 王飞, 过锋, 吴伟, 陈芳园, 朱琪, 钱宏, 管茂林
Owner JIANGNAN INST OF COMPUTING TECH