
A shared resource scheduling method and system for distributed parallel processing

A technology for shared-resource scheduling in parallel processing, applicable to multiprogramming devices and program starting/switching. It solves the problem of access competition among multiple processing nodes, improves processing efficiency, and avoids head-of-line blocking.

Inactive Publication Date: 2011-12-28
EAST CHINA NORMAL UNIV

AI Technical Summary

Problems solved by technology

However, in a distributed parallel processing environment, each processor subsystem (that is, each processing node of the system) runs its own independent operating system and model structure. At any time, a shared resource may face access competition among multiple processing nodes, because the initiators of access events to the shared resource have no unified management model to restrict their access.

Method used



Embodiment Construction

[0020] Specific embodiments of the present invention are described in detail below with reference to the accompanying drawings.

[0021] The shared resource scheduling method for the distributed parallel system proposed by the present invention is a distributed processing mechanism, as shown in Figure 1. The mechanism is realized by a shared resource scheduling unit distributed in each processing node, a resource lock attached to each shared resource, and an access request arbitration unit. These distributed processing units may be software modules, hardware modules with the same function, or hardware modules with firmware. They communicate by sending messages to each other through the switching unit; the messages are of two types, resource access request signals and resource access permission signals.
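The two message types exchanged through the switching unit can be sketched as simple records. This is a minimal illustrative model; the field names and types are assumptions for clarity, not taken from the patent.

```python
from dataclasses import dataclass

# Hypothetical encodings of the two signals the distributed units
# exchange through the switching unit (names are illustrative).

@dataclass(frozen=True)
class AccessRequest:
    """Resource access request signal, node -> resource."""
    node_id: int       # requesting processor subsystem
    resource_id: int   # shared resource being requested
    priority: int = 0  # consulted by the arbitration unit

@dataclass(frozen=True)
class AccessGrant:
    """Resource access permission signal, resource -> node."""
    node_id: int
    resource_id: int

# A node issues a request; the arbitration unit answers with a grant.
req = AccessRequest(node_id=2, resource_id=7, priority=1)
grant = AccessGrant(node_id=req.node_id, resource_id=req.resource_id)
```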

[0022] Each processing node of the distributed parallel system, that is, each processor subsystem, runs its own operating system and application programs. In order ...
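The abstract states that each node's scheduling unit manages pending requests with virtual queues, one dedicated queue per accessible shared resource, so that a busy resource does not stall requests for other resources. A minimal sketch of that per-node structure (class and method names are assumptions):

```python
from collections import deque

class SchedulingUnit:
    """Per-node scheduler: one virtual queue per shared resource.

    Because each resource has its own queue, a request stuck waiting
    for a busy resource never blocks requests aimed at other resources
    (this is how head-of-line blocking is avoided).
    """

    def __init__(self, resource_ids):
        self.queues = {rid: deque() for rid in resource_ids}

    def enqueue(self, resource_id, request):
        self.queues[resource_id].append(request)

    def next_request(self, resource_id):
        q = self.queues[resource_id]
        return q.popleft() if q else None

# Resource 0 has a backlog, yet resource 1's request is served at once.
sched = SchedulingUnit(resource_ids=[0, 1])
sched.enqueue(0, "req-A")
sched.enqueue(1, "req-B")
first = sched.next_request(1)
```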



Abstract

The invention discloses a shared resource scheduling method and system for distributed parallel processing. The method and system are based on a distributed operation mechanism, implemented by shared resource scheduling units distributed in the processor subsystems, resource locks distributed in the shared resources, and resource request arbitration units. These distributed processing units communicate by sending messages (resource access requests / permissions) to one another through the switching unit. The shared resource scheduling unit in each processor subsystem uses virtual queue technology to manage all resource access requests in the data cache; that is, a dedicated queue is opened for each accessible shared resource. The resource lock in each shared resource guarantees the uniqueness of access to that resource at any time; a resource lock has two states, lock occupied and lock released. The request arbitration unit in each shared resource uses a priority-based fair polling algorithm to arbitrate resource access requests from different processing nodes. The invention effectively avoids competition when processing nodes access a shared resource, avoids deadlock of shared resources and starvation of processing nodes, and provides efficient mutually exclusive access to shared resources.
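The two remaining mechanisms in the abstract, the two-state resource lock and the priority-based fair polling arbiter, can be sketched as follows. This is an illustrative interpretation, not the patent's implementation: the scan order realizes fair round-robin polling (preventing starvation), and within one scan a higher-priority pending request wins.

```python
class ResourceLock:
    """Two states: occupied (holder set) or released (holder is None).
    Guarantees at most one node holds the resource at any time."""

    def __init__(self):
        self.holder = None  # None => lock released

    def try_acquire(self, node_id):
        if self.holder is None:
            self.holder = node_id
            return True
        return False

    def release(self, node_id):
        if self.holder == node_id:
            self.holder = None

class FairPollingArbiter:
    """Priority-based fair polling (sketch): scan nodes round-robin
    starting just past the last grant; the highest-priority pending
    request wins, ties going to the node reached first in the scan."""

    def __init__(self, num_nodes):
        self.num_nodes = num_nodes
        self.last = num_nodes - 1  # so the first scan starts at node 0

    def arbitrate(self, pending):
        """pending: dict node_id -> priority of nodes with a request.
        Returns the winning node_id, or None if nothing is pending."""
        best = None
        for step in range(1, self.num_nodes + 1):
            nid = (self.last + step) % self.num_nodes
            if nid in pending and (best is None or pending[nid] > pending[best]):
                best = nid
        if best is not None:
            self.last = best
        return best

# Equal priorities rotate fairly; the winner then takes the lock.
lock = ResourceLock()
arb = FairPollingArbiter(num_nodes=3)
winner = arb.arbitrate({0: 0, 1: 0, 2: 0})
granted = lock.try_acquire(winner)
```

Because the scan always resumes after the most recent winner, every pending node of a given priority is reached within one full rotation, which is the fairness property the abstract credits with avoiding starvation.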

Description

Technical field

[0001] The invention belongs to the technical field of distributed information processing and parallel computing, and in particular relates to a shared resource scheduling method and system for distributed parallel processing.

Background technique

[0002] In many information devices or systems, when the amount of information to be processed in data acquisition, processing, calculation, display, output control, and communication is large, and there are certain speed requirements, distributed dual-processor or multi-processor technology is often used to process the information in parallel. In such a distributed parallel system, each processor subsystem often needs to share many common resources, for example, exchanging information through time-shared access to a common memory, and possibly sharing the same I/O interface or network port for data input and output.

[0003] Distributed parallel processing systems often have resource competition, wh...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06F9/48
Inventors: 胡星波, 晏渭川, 胡津翔, 梁虹
Owner: EAST CHINA NORMAL UNIV