
Achieving Low Grace Period Latencies Despite Energy Efficiency

A grace-period and energy-efficiency technology, applied in the field of computer systems and methods, that addresses the problems of increased grace period latencies, burdensome read-side lock acquisition, and processors that sleep with callbacks being unable to take full advantage of subsequent grace periods.

Status: Inactive
Publication Date: 2015-06-04
IBM CORP
  • Summary
  • Abstract
  • Description
  • Claims
  • Application Information

AI Technical Summary

Benefits of technology

This patent provides a method, system, and computer program for optimizing the performance of processors with Read-Copy Update (RCU) callbacks in an energy-efficient environment. The method involves assigning different grace period numbers to different groups of a processor's RCU callbacks, periodically starting new grace periods, and periodically ending old grace periods. Groups of RCU callbacks are maintained on sublists and advanced when a corresponding grace period number becomes available. The method also includes recording future grace periods needed by the processor so that they can be initiated without waking the processor if it is in a low power state. The technical effects of this patent include reducing grace period latency for processors with RCU callbacks in a low-power environment, optimizing the performance of processor groups with RCU callbacks, and offloading callback invocation from specially designated processors.

Problems solved by technology

By way of example, a network routing table that is updated at most once every few minutes but searched many thousands of times per second is a case where read-side lock acquisition would be quite burdensome.
Unfortunately, the RCU_FAST_NO_HZ kernel configuration option (which allows processors with pending callbacks to enter low-power dyntick-idle states) can also result in greatly increased grace period latencies.
This is because processors that are sleeping with callbacks cannot take full advantage of subsequent grace periods.
So even if several grace periods elapse while the processor is sleeping, the processor will take advantage of only one, thus potentially delaying its callbacks for another sleep period.
Another scenario causing increased grace period latency for a sleeping processor (in a RCU_FAST_NO_HZ kernel) is when no other processor in the system needs a grace period to start.
In that case, the start of the next grace period will be delayed until the sleeping processor awakens, further degrading grace period latency for another sleep period.
Finally, the state machine work that RCU_FAST_NO_HZ performs as a processor enters idle often has no effect, yet still consumes processor time, and thus energy.



Examples


Example embodiments

[0047] Turning now to the figures, wherein like reference numerals represent like elements in all of the several views, FIG. 4 illustrates an example multiprocessor computer system in which the grace period processing technique described herein may be implemented. In FIG. 4, a computer system 2 includes multiple processors 4₁, 4₂ . . . 4ₙ, a system bus 6, and a program memory 8. There are also cache memories 10₁, 10₂ . . . 10ₙ and cache controllers 12₁, 12₂ . . . 12ₙ respectively associated with the processors 4₁, 4₂ . . . 4ₙ. A conventional memory controller 14 is associated with the memory 8. As shown, the memory controller 14 may reside separately from processors 4₂ . . . 4ₙ (e.g., as part of a chipset). As discussed below, it could also comprise plural memory controller instances residing on the processors 4₁, 4₂ . . . 4ₙ.

[0048]The computer system 2 may represent any of several different types of computing apparatus. Such computing apparatus may include, but are not limited to, g...


Abstract

A technique for achieving low grace-period latencies in an energy-efficient environment in which processors with Read-Copy Update (RCU) callbacks are allowed to enter low power states. In an example embodiment, for each processor that has RCU callbacks, different grace period numbers are assigned to different groups of the processor's RCU callbacks. New grace periods are periodically started and old grace periods are periodically ended. As old grace periods end, groups of RCU callbacks having corresponding assigned grace period numbers are invoked.

Description

BACKGROUND

[0001] 1. Field

[0002] The present disclosure relates to computer systems and methods in which data resources are shared among data consumers while preserving data integrity and consistency relative to each consumer. More particularly, the disclosure concerns a mutual exclusion mechanism known as “read-copy update.”

[0003] 2. Description of the Prior Art

[0004] By way of background, read-copy update (also known as “RCU”) is a mutual exclusion technique that permits shared data to be accessed for reading without the use of locks, writes to shared memory, memory barriers, atomic instructions, or other computationally expensive synchronization mechanisms, while still permitting the data to be updated (modify, delete, insert, etc.) concurrently. The technique is well suited to both uniprocessor and multiprocessor computing environments wherein the number of read operations (readers) accessing a shared data set is large in comparison to the number of update operations (updaters), and ...

Claims


Application Information

Patent Type & Authority: Applications (United States)
IPC (8): G06F1/32
CPC: G06F1/3293; G06F9/5094; Y02D10/00; G06F1/3203
Inventor: MCKENNEY, PAUL E.
Owner: IBM CORP