
A Shared Data Dynamic Update Method for Data Conflict-Free Programs

A technology for dynamically updating shared data, applied in inter-program communication, multi-program devices, program control design, and similar fields. It addresses problems such as unordered networks not natively supporting snooping protocols, the complexity of verifying directory-protocol correctness, and reduced protocol performance, and achieves efficient automatic dynamic update and invalidation, elimination of invalidation messages, and reduced network and area overhead.

Active Publication Date: 2020-08-28
NAT UNIV OF DEFENSE TECH

AI Technical Summary

Problems solved by technology

(1) The sharing information for a given address is encoded in a full bit-vector directory, whose required storage grows linearly with the number of cores; this storage overhead limits the directory's applicability, while coarse-grained sharing schemes encode imprecise sharing information and essentially trade performance for scalability. (2) The directory protocol incurs latency and communication overhead, causing significant performance and power problems: it requires invalidation messages, acknowledgment messages, and indirect cache-to-cache transactions routed through the intermediate directory. (3) Because of races between data accesses and the many transient states, the correctness of a directory protocol is complex and difficult to verify.
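The linear scaling in point (1) can be made concrete with a short calculation. This is a sketch: the 64-byte line size and the entry layout (one presence bit per core, protocol state bits omitted) are illustrative assumptions, not figures taken from the patent.

```python
# Estimate full bit-vector directory storage: each tracked cache line needs
# one presence bit per core, so directory state grows linearly with core count.

def directory_bits_per_line(num_cores: int) -> int:
    """Presence bits per directory entry in a full bit-vector directory."""
    return num_cores  # one sharer bit per core (state bits omitted)

def directory_overhead(num_cores: int, line_bytes: int = 64) -> float:
    """Directory storage as a fraction of the cached data it tracks."""
    return directory_bits_per_line(num_cores) / (line_bytes * 8)

for cores in (16, 64, 256, 1024):
    print(cores, f"{directory_overhead(cores):.1%}")
# 16 cores -> 3.1%, 64 -> 12.5%, 256 -> 50.0%, 1024 -> 200.0%
```

At 1024 cores the directory would be twice as large as the data it tracks, which is why the patent treats full bit-vector directories as unscalable.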
Although practitioners have proposed many optimized directory organizations, these protocols either increase implementation complexity or incur performance and power overhead, and they require a large number of coherence states.
[0005] Snooping coherence is not limited by directory storage overhead. The key to a snooping protocol is to broadcast protocol transactions in order over a bus or other broadcast medium, ensuring that a processor obtains exclusive access to a data item before writing it. Compared with a directory protocol, broadcasting messages offers low latency and high performance. However, a snooping protocol fundamentally relies on an ordered interconnection network so that all cores observe memory access requests in the same order; such an ordered broadcast network often incurs substantial overhead, and in scalable systems a snooping protocol may lose its low-latency, high-efficiency advantage.
Interconnection networks compatible with snooping protocols mainly consist of a bus or crossbar (which uses arbitration for ordering) or a bufferless ring (which guarantees in-order delivery from one ordering node to all nodes). However, existing ordered SoC interconnects scale poorly: a bus is limited by bandwidth, a ring network suffers latency problems, a crossbar incurs large area overhead, and a mesh network is inherently unordered and cannot natively support a snooping protocol.
[0006] Current cache coherence protocols are complex and inefficient, and hardware optimizations are restricted to varying degrees. To satisfy the definition of coherence, a protocol must respond to a write operation immediately, invalidating other cores' cached copies of the shared data and returning the latest value. A directory protocol performs this invalidation indirectly through the directory; such cache-to-cache invalidation increases transaction latency, reduces protocol performance, and adds directory storage overhead. A snooping protocol, as described above, broadcasts invalidation requests over an ordered network, which increases the communication overhead of the whole protocol.




Embodiment Construction

[0032] The present invention will be further described below in conjunction with the accompanying drawings and specific preferred embodiments, but the protection scope of the present invention is not limited thereby.

[0033] As shown in Figures 1-4, the method for dynamically updating shared data for data-conflict-free programs in this embodiment includes: during the execution of data-conflict-free parallel programs, when the CPU executes a memory access instruction, it identifies requests for shared data and collects history information about accesses to the shared data; at a synchronization point, the cache controller performs a dynamic update or invalidation operation on expired shared data in the local cache according to the collected access history, performing the dynamic update operation on shared data judged to be of the first type and the invalidation operation on shared data judged to be of the second type.
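The flow in [0033] can be sketched in software. This is an illustrative model only: the patent implements the mechanism in the cache controller, and the concrete classification rule below (lines re-read frequently are "first type" and worth updating in place, rarely read lines are "second type" and invalidated) is an assumed stand-in for the patent's actual criterion.

```python
# Sketch: collect per-line access history during the data-conflict-free
# parallel phase, then at a synchronization point either update or
# invalidate each expired shared line based on that history.

class CacheLine:
    def __init__(self, addr, data):
        self.addr, self.data, self.valid = addr, data, True
        self.read_count = 0  # access history gathered per memory instruction

class LocalCache:
    def __init__(self):
        self.lines = {}  # addr -> CacheLine

    def record_access(self, addr):
        """Called for each memory access instruction that hits shared data."""
        line = self.lines.get(addr)
        if line is not None and line.valid:
            line.read_count += 1

    def sync_point(self, expired, latest):
        """expired: addrs written by other cores; latest: addr -> new value."""
        for addr in expired:
            line = self.lines.get(addr)
            if line is None or not line.valid:
                continue
            if line.read_count >= 2:   # first type: dynamically update in place
                line.data = latest[addr]
            else:                      # second type: invalidate
                line.valid = False
            line.read_count = 0        # start a fresh history window

cache = LocalCache()
cache.lines[0x40] = CacheLine(0x40, "old")
cache.lines[0x80] = CacheLine(0x80, "old")
cache.record_access(0x40)
cache.record_access(0x40)              # 0x40 is hot, 0x80 is cold
cache.sync_point({0x40, 0x80}, {0x40: "new", 0x80: "new"})
print(cache.lines[0x40].data, cache.lines[0x40].valid)  # new True
print(cache.lines[0x80].valid)                          # False
```

Deferring all update/invalidation work to the synchronization point is what lets the scheme drop per-write invalidation messages: a data-conflict-free program guarantees no core reads the stale copy before the next synchronization.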



Abstract

The invention discloses a shared-data dynamic-update method for data-conflict-free programs. The method includes: while CPUs execute memory access instructions during data-conflict-free parallel programs, identifying shared-data requests and collecting history information about accesses to the shared data; and, at a synchronization point, executing a dynamic-update operation or an invalidation operation on expiring shared data according to the collected access history, wherein the dynamic-update operation is executed on shared data determined to be of a first type, and the invalidation operation is executed on shared data determined to be of a second type. The method automatically realizes the dynamic update and invalidation of shared data, and has the advantages of a simple implementation, low network-area and cache-coherence-protocol overhead, a high cache hit rate, and good cache-coherence-protocol performance.

Description

technical field [0001] The invention relates to the technical field of cache coherence protocols for shared-memory multiprocessors, and in particular to a method for dynamically updating shared data for data-conflict-free programs. Background technique [0002] The shared-memory multiprocessor is a parallel programming model that provides a single address space to simplify parallel programming. A large-capacity, multi-level cache greatly reduces the processor's demand for memory bandwidth and significantly improves processor performance, but it inevitably allows a shared value to be backed up in multiple caches at the same time; this caching of shared data introduces the cache coherence problem. The cache coherence problem is that when two or more processors back up data in their own caches, they may, if nothing prevents it, see different values. A protocol that maintains cache coherence across multiple processors is called a cache coherence protocol.
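The coherence problem described in [0002] can be illustrated with a toy model of two private caches over one memory. This is pure illustration with no real hardware semantics; the dictionary "caches" are an assumption for the sketch.

```python
# Two cores each keep a private copy of address A. Without a coherence
# protocol, core 1's write reaches memory but not core 0's cached copy,
# so core 0 continues to read a stale value.

memory = {"A": 1}
cache0 = {"A": memory["A"]}   # core 0 caches A
cache1 = {"A": memory["A"]}   # core 1 caches A

cache1["A"] = 2               # core 1 writes its private copy...
memory["A"] = cache1["A"]     # ...and writes it back to memory

print(cache0["A"], memory["A"])  # 1 2 -- core 0 still sees the stale value
```

A coherence protocol closes exactly this gap, by either invalidating or updating core 0's copy when core 1 writes.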

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06F9/54
CPC: G06F9/544
Inventor: 马胜, 王志英, 何锡明, 陆洪毅, 沈立, 陈微, 刘文杰
Owner NAT UNIV OF DEFENSE TECH