
Task scheduling system of on-chip multi-core computing platform and method for task parallelization

A task scheduling technology for on-chip multi-core computing platforms, applicable to combinations of digital computers and to multiprogramming devices. It addresses problems such as limited platform performance and the inability to parallelize task execution automatically, with the effect of improving parallelism and throughput.

Active Publication Date: 2011-07-20
SUZHOU INST FOR ADVANCED STUDY USTC

AI Technical Summary

Problems solved by technology

Among these models, OpenMP provides a general thread-level programming model that relies mainly on mutexes to synchronize tasks across threads. Because the mutexes are controlled by the programmer, however, OpenMP cannot parallelize tasks automatically.
Other programming models, such as MPI, likewise require programmers to divide tasks manually and even to schedule them in parallel explicitly, so the achievable speedup and performance improvement are largely limited by the programmer's own skill.
[0005] In general, the task division and scheduling methods of current parallel programming models require manual intervention and configuration by the programmer, which limits the performance optimization the platform can obtain.
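The burden described above can be illustrated with a minimal sketch (ours, not from the patent): in a thread-level model, correct ordering of dependent work hinges entirely on the programmer placing and managing the lock.

```python
# Minimal illustration of programmer-managed synchronization: the shared
# counter is only correct because the programmer remembered to take the lock.
import threading

counter = 0
lock = threading.Lock()  # mutex controlled by the programmer, not the runtime

def task(increment):
    global counter
    with lock:  # forgetting this line silently breaks the program
        counter += increment

threads = [threading.Thread(target=task, args=(1,)) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 8
```

Nothing in this model discovers that the eight tasks conflict; the patent's point is that dependency detection and scheduling should instead happen automatically at run time.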

Method used



Examples


Embodiment

[0038] As shown in figures 1 and 2, the task scheduling system of the on-chip multi-core computing platform includes a user service module that supplies the tasks to be executed and a computing service module that executes multiple tasks on the on-chip multi-core computing platform. A core scheduling service module sits between the user service module and the computing service module: it accepts the user service module's task requests as input, judges the data dependencies between different tasks from its records, and dispatches the task requests in parallel to different computing service modules for execution.
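The core scheduling service described above can be sketched roughly as follows. This is our own hypothetical rendering, not the patent's implementation: each task request declares the variables it reads and writes, the scheduler records the last writer of each variable, and a task is dispatched only once every task producing one of its inputs has completed.

```python
# Hypothetical sketch of dependency-driven dispatch (RAW dependencies only;
# the real system would also have to handle WAR/WAW hazards).
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    reads: set
    writes: set
    deps: set = field(default_factory=set)

def schedule(tasks):
    """Group tasks into waves that may execute in parallel."""
    last_writer = {}  # variable -> name of the task that will produce it
    for t in tasks:
        # read-after-write: depend on whichever task last wrote each input
        t.deps = {last_writer[v] for v in t.reads if v in last_writer}
        for v in t.writes:
            last_writer[v] = t.name
    waves, done, pending = [], set(), list(tasks)
    while pending:
        ready = [t for t in pending if t.deps <= done]  # all deps finished
        waves.append([t.name for t in ready])
        done |= {t.name for t in ready}
        pending = [t for t in pending if t.name not in done]
    return waves

tasks = [
    Task("A", reads=set(), writes={"x"}),
    Task("B", reads=set(), writes={"y"}),
    Task("C", reads={"x", "y"}, writes={"z"}),
]
print(schedule(tasks))  # [['A', 'B'], ['C']]
```

Tasks A and B are independent and form one parallel wave; C must wait for both, which is exactly the kind of judgment the core scheduling service makes from its records.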

[0039] Figure 1 shows the system architecture of the task scheduling system of the on-chip multi-core computing platform. Its modules include a task queue, a variable state table, a group of reservation stations, and a ROB (reorder buffer) table. The modules are as follows:
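The four structures listed mirror Tomasulo-style out-of-order hardware. As a rough software analogy (all names are ours, and this is a sketch under that assumption, not the patent's design), they might interact like this:

```python
# Software analogy of the four structures: a FIFO task queue, a variable
# state table mapping each variable to its pending writer's ROB entry,
# reservation stations holding tasks that wait for operands, and a ROB
# (reorder buffer) that tracks issued tasks in order.
from collections import deque

task_queue = deque()       # incoming task requests, in arrival order
variable_state = {}        # variable name -> ROB index of pending writer
reservation_stations = []  # issued tasks waiting for source operands
rob = []                   # reorder buffer entries, in issue order

def issue(task, reads, writes):
    """Issue one task, resolving its sources via the variable state table."""
    rob_index = len(rob)
    rob.append({"task": task, "done": False})
    # reading a variable with a pending writer creates a dependency tag
    sources = {v: variable_state.get(v) for v in reads}
    reservation_stations.append({"task": task, "waiting_on": sources})
    for v in writes:
        variable_state[v] = rob_index  # later readers depend on this entry
    return rob_index

issue("load_a", reads=[], writes=["a"])
issue("use_a", reads=["a"], writes=["b"])
print(reservation_stations[1]["waiting_on"])  # {'a': 0}
```

The second task's reservation station records that operand `a` will come from ROB entry 0, so it can be released for execution as soon as that entry completes.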

[0040] 1) task queue

[0041] ...



Abstract

The invention discloses a task scheduling system of an on-chip multi-core computing platform and a method for task parallelization. The system comprises user service modules that provide the tasks to be executed and computation service modules that execute multiple tasks on the on-chip multi-core computing platform. It is characterized in that core scheduling service modules are arranged between the user service modules and the computation service modules: the core scheduling service modules receive the task requests of the user service modules as input, judge the data dependency relations among different tasks through records, and schedule the task requests in parallel to different computation service modules for execution. By monitoring dependencies and parallelizing tasks automatically at run time, the system improves platform throughput and system performance.

Description

technical field

[0001] The invention belongs to the technical field of task scheduling for on-chip multi-core computing platforms, and in particular relates to a task scheduling system of an on-chip multi-core computing platform and a task parallelization method.

background technique

[0002] As the complexity of very large scale integration (VLSI) grows rapidly under Moore's law, the performance improvement of a single processor has reached its limit, and multi-core processors have become the inevitable direction of microprocessor architecture. A single-chip heterogeneous multi-core system in particular integrates heterogeneous processing units such as general-purpose processors, DSPs, ASIPs, and even mixed-signal circuits on the same chip. By exploiting the respective advantages of these heterogeneous units, it can meet the real-time and power-consumption requirements of embedded systems and has become a re...

Claims


Application Information

IPC(8): G06F9/46; G06F15/16
Inventor: Zhou Xuehai (周学海), Wang Chao (王超), Zhang Junneng (张军能), Feng Xiaojing (冯晓静), Li Xi (李曦), Chen Xianglan (陈香兰)
Owner SUZHOU INST FOR ADVANCED STUDY USTC