
Adaptive Processing for Data Sharing with Lock Omission and Lock Selection

A technology for hardware lock elision and associated processing circuits, applied in the fields of electrical digital data processing, concurrent instruction execution, machine-execution devices, and the like.

Active Publication Date: 2018-11-23
INT BUSINESS MASCH CORP

AI Technical Summary

Problems solved by technology

Transactions execute optimistically without acquiring locks; however, a transaction may need to be aborted and retried if one of its operations on a memory location conflicts with another operation on the same memory location.
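The abort-and-retry behavior described above is commonly handled with a bounded retry loop that falls back to a conventional lock when optimism keeps failing. The sketch below is illustrative only (the `ConflictAbort` exception, `MAX_RETRIES` budget, and function names are hypothetical, not from the patent):

```python
class ConflictAbort(Exception):
    """Simulated transactional conflict abort (hypothetical stand-in
    for a hardware-detected memory conflict)."""

MAX_RETRIES = 3  # illustrative retry budget; not specified by the patent

def run_with_lock_elision(transaction, critical_section, lock):
    """Optimistically run `transaction` without the lock; after repeated
    conflict aborts, fall back to acquiring `lock` and running the
    critical section non-transactionally."""
    for _ in range(MAX_RETRIES):
        try:
            return transaction()      # commits if no conflict occurred
        except ConflictAbort:
            continue                  # aborted: discard work and retry
    with lock:                        # pessimistic fallback path
        return critical_section()
```

A caller would pass the same logical critical section twice: once as the optimistic transactional body and once as the lock-protected fallback.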




Embodiment Construction

[0018] Historically, a computer system or processor had only a single processor (also known as a processing unit or central processing unit). The processor included an instruction processing unit (IPU), a branch unit, a memory control unit, and the like. Such processors were capable of executing a single thread of a program at a time. Operating systems were developed that could time-share a processor by allocating one program to execute on the processor for a period of time, and then allocating another program to execute on it for another period of time. As technology evolved, memory-subsystem caches were often added to processors, along with complex dynamic address translation including translation lookaside buffers (TLBs). The IPU itself was often referred to as the processor. As technology continued to develop, an entire processor could be packaged on a single semiconductor chip or die, and such processors were called microprocessors. Then, processors were developed that added mult...



Abstract

In a Hardware Lock Elision (HLE) environment, predictively determining whether an HLE transaction should actually acquire a lock and execute non-transactionally is provided. Included is: based on encountering an HLE lock-acquire instruction, determining, based on an HLE predictor, whether to elide the lock and proceed as an HLE transaction or to acquire the lock and proceed as a non-transaction; based on the HLE predictor predicting to elide, adding the address of the lock to the read-set of the transaction, suppressing any write by the lock-acquire instruction to the lock, and proceeding in HLE transactional-execution mode until an xrelease instruction that releases the lock is encountered or the HLE transaction encounters a transactional conflict; and based on the HLE predictor predicting not to elide, treating the HLE lock-acquire instruction as a non-HLE lock-acquire instruction and proceeding in non-transactional mode.
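The abstract's elide-or-acquire decision can be pictured as a predictor indexed by lock address whose state is updated by commit/abort outcomes. The sketch below uses a 2-bit saturating counter per lock address, which is a common predictor shape but an assumption here; the patent's concrete predictor mechanism may differ:

```python
class HLEPredictor:
    """Per-lock-address 2-bit saturating counter (an assumed predictor
    shape for illustration; not the patent's specific design)."""

    def __init__(self):
        self.counters = {}  # lock address -> counter value in [0, 3]

    def should_elide(self, lock_addr):
        # Counter >= 2 means recent history favors eliding the lock.
        # Unknown locks default to 3 (optimistically elide).
        return self.counters.get(lock_addr, 3) >= 2

    def record_commit(self, lock_addr):
        # Successful HLE commit: strengthen the elide prediction.
        c = self.counters.get(lock_addr, 3)
        self.counters[lock_addr] = min(3, c + 1)

    def record_abort(self, lock_addr):
        # Transactional conflict abort: weaken the elide prediction.
        c = self.counters.get(lock_addr, 3)
        self.counters[lock_addr] = max(0, c - 1)
```

On encountering an HLE lock-acquire instruction, the pipeline would consult `should_elide(lock_addr)` and either suppress the lock write and enter transactional mode, or perform the acquire as an ordinary locked operation.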

Description

Technical Field

[0001] The present disclosure relates generally to transactional memory systems, and more particularly to methods, computer programs, and computer systems for adaptively sharing data using lock elision and lock selection.

Background

[0002] The number of central processing unit (CPU) cores on a chip, and the number of CPU cores connected to a shared memory, continue to grow significantly to support increasing workload-capacity requirements. The ever-increasing number of CPUs cooperating on the same workload places a significant burden on software scalability; for example, shared queues or data structures protected by traditional semaphores become hot spots and result in sub-linear n-way scaling curves. Traditionally, this has been addressed by implementing finer-grained locking in software. Implementing finer-grained locking to improve software scalability can be very complex and error-prone, and at today's CPU frequencies, the la...
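The finer-grained locking the background describes is often realized with lock striping: instead of one global lock serializing every access, keys are partitioned across several locks so unrelated accesses proceed in parallel. A minimal illustrative sketch (class and method names are hypothetical):

```python
import threading

class StripedCounterMap:
    """Fine-grained locking via lock striping: keys are partitioned
    across N stripe locks instead of one global lock. Illustrative of
    the background discussion only."""

    def __init__(self, num_stripes=16):
        self.locks = [threading.Lock() for _ in range(num_stripes)]
        self.buckets = [dict() for _ in range(num_stripes)]

    def _stripe(self, key):
        # Map a key to its stripe index.
        return hash(key) % len(self.locks)

    def increment(self, key):
        i = self._stripe(key)
        with self.locks[i]:   # only this stripe is serialized
            self.buckets[i][key] = self.buckets[i].get(key, 0) + 1

    def get(self, key):
        i = self._stripe(key)
        with self.locks[i]:
            return self.buckets[i].get(key, 0)
```

The complexity the text warns about shows up as soon as an operation must touch multiple stripes (e.g. a consistent size or resize), which requires careful lock-ordering to avoid deadlock; lock elision sidesteps that by letting non-conflicting critical sections run concurrently under a single coarse lock.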


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06F9/38
CPC: G06F9/30087; G06F9/526
Inventors: M. K. Gschwind, M. M. Michael, V. Salapura, Chung-Lung Shum
Owner: INT BUSINESS MASCH CORP