Multi-modal massive-data-flow scheduling method under multi-core DSP

A scheduling method and data flow technology, applied in the fields of multi-program devices, resource allocation, inter-program communication, etc., which can solve the problem that existing approaches do not consider massive-data-flow segmentation and multi-modal scheduling for multi-core load balancing.

Active Publication Date: 2018-01-19
XIAN MICROELECTRONICS TECH INST

AI Technical Summary

Problems solved by technology

Patent CN1608379 proposes a method and equipment for determining patterns in adjacent data blocks, and considers in detail the comparison of adjacent data blocks in the horizontal, vertical, oblique and rotational directions, but it does not consider the segmentation of massive data flows or multi-modal scheduling for multi-core load balancing.

Method used



Examples


Embodiment Construction

[0140] The present invention provides a multi-modal scheduling method for massive data streams under a multi-core DSP. The data block scheduling scheme is planned from four perspectives: load balancing, allocation granularity, data dimension and processing order. Three data block selection methods, two data allocation methods and one data block grouping method are proposed, and a flexible, easy-to-use way of combining them is designed.
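
To make the three selection modes concrete, the following is a minimal C sketch of continuous, random and spiral selection over a two-dimensional grid of data blocks. The function names, the block_id_t structure and the centre-outward spiral ordering are illustrative assumptions, not the patent's actual interfaces.

    /* Illustrative sketch: three ways of picking the next data block
     * from an R x C grid of blocks (not the patent's concrete API).   */
    #include <stdlib.h>

    typedef struct { int row, col; } block_id_t;

    /* Continuous selection: walk the grid in row-major order. */
    block_id_t select_continuous(int step, int rows, int cols) {
        (void)rows;                           /* unused, kept for symmetry */
        block_id_t b = { step / cols, step % cols };
        return b;
    }

    /* Random selection: pick any block uniformly at random (a real
     * scheduler would also track which blocks were already issued). */
    block_id_t select_random(int rows, int cols) {
        block_id_t b = { rand() % rows, rand() % cols };
        return b;
    }

    /* Spiral selection: start at the grid centre and spiral outwards.
     * `step` is the 0-based position in the spiral ordering; clamping
     * to the grid boundary is omitted for brevity.                    */
    block_id_t select_spiral(int step, int rows, int cols) {
        int r = rows / 2, c = cols / 2;       /* start at the centre   */
        int len = 1, dir = 0, taken = 0;
        const int dr[4] = { 0, 1, 0, -1 };    /* right, down, left, up */
        const int dc[4] = { 1, 0, -1, 0 };
        while (taken < step) {
            for (int leg = 0; leg < 2 && taken < step; ++leg) {
                for (int i = 0; i < len && taken < step; ++i) {
                    r += dr[dir]; c += dc[dir]; ++taken;
                }
                dir = (dir + 1) % 4;
            }
            ++len;                            /* arm grows every two legs */
        }
        block_id_t b = { r, c };
        return b;
    }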

[0141] Referring to Figure 7, the present invention forms part of a multi-core DSP massive data stream parallel framework and is mainly used for data block scheduling of massive data streams. The framework is divided into main control core parallel middleware and an acceleration core parallel support system. The main control core is responsible for creating the massive data parallel scheduling environment, the tasks and the data blocks, and for completing the scheduling and allocation of tasks and data blocks; the acceleration core is responsible for processing the allocated data blocks.
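
The request packet queue through which the main control core and the acceleration cores exchange work can be pictured with a sketch like the one below. The packet fields, queue depth and ring-buffer operations are assumptions for illustration, not the patent's concrete data structures; a real multi-core DSP implementation would also need memory barriers or hardware queue support.

    /* Illustrative sketch of a per-core request packet queue:
     * the main control core pushes, one acceleration core pops. */
    #include <stdint.h>

    typedef struct {
        uint32_t task_id;       /* task the block belongs to                */
        uint32_t block_id;      /* which data block to fetch and process    */
        uint32_t src_addr;      /* source address in shared/DDR memory      */
        uint32_t length;        /* block length in bytes                    */
        uint8_t  target_core;   /* acceleration core chosen by the scheduler*/
    } request_packet_t;

    #define QUEUE_DEPTH 64
    typedef struct {
        request_packet_t slots[QUEUE_DEPTH];
        volatile uint32_t head, tail;       /* head: consumer, tail: producer */
    } request_queue_t;

    static int queue_push(request_queue_t *q, const request_packet_t *p) {
        uint32_t next = (q->tail + 1) % QUEUE_DEPTH;
        if (next == q->head) return -1;     /* queue full  */
        q->slots[q->tail] = *p;
        q->tail = next;
        return 0;
    }

    static int queue_pop(request_queue_t *q, request_packet_t *p) {
        if (q->head == q->tail) return -1;  /* queue empty */
        *p = q->slots[q->head];
        q->head = (q->head + 1) % QUEUE_DEPTH;
        return 0;
    }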



Abstract

The invention discloses a multi-modal massive-data-flow scheduling method under a multi-core DSP. The multi-core DSP includes a main control core and an acceleration core. Requests are transmitted between the main control core and the acceleration core through a request packet queue. Three data block selection methods of continuous selection, random selection and spiral selection are determined on the basis of data dimensions and data priority orders. Two multi-core data block allocation methods of cyclic scheduling and load balancing scheduling are determined according to load balancing. Data blocks selected and determined through a data block grouping method according to allocation granularity are loaded into multiple computing cores for processing. The method adopts multi-level data block scheduling manners; satisfies requirements on system load, data correlation, processing granularity, data dimensions and ordering when the data blocks are scheduled; has good generality and portability; and expands the modes and forms of data block scheduling at multiple levels, giving it a wider scope of application. With the method, a user only needs to configure the data block scheduling manner and the allocation granularity, the system automatically completes data scheduling, and the efficiency of parallel development is improved.
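
A minimal sketch of the two allocation modes and the granularity-based grouping named above, assuming a fixed number of acceleration cores and a per-core pending-block count as the load metric; the names and the load metric are illustrative, not taken from the patent.

    /* Illustrative sketch of cyclic vs. load-balancing allocation and
     * grouping by allocation granularity.                              */
    #define NUM_CORES 8

    /* Cyclic (round-robin) scheduling: block groups go to cores in turn. */
    static int allocate_cyclic(int group_index) {
        return group_index % NUM_CORES;
    }

    /* Load-balancing scheduling: send the next group to the currently
     * least-loaded acceleration core (load measured as pending blocks). */
    static int allocate_balanced(const int core_load[NUM_CORES]) {
        int best = 0;
        for (int c = 1; c < NUM_CORES; ++c)
            if (core_load[c] < core_load[best]) best = c;
        return best;
    }

    /* Grouping by allocation granularity: pack `granularity` consecutive
     * block indices into one group before handing it to a core.         */
    static int group_of_block(int block_index, int granularity) {
        return block_index / granularity;
    }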

Description

Technical Field

[0001] The invention belongs to the field of multi-core parallel computing, and in particular relates to a multi-modal scheduling method for massive data streams under a multi-core DSP.

Background Technique

[0002] With the wide application of high-performance multi-core DSP processors in weapons and equipment systems, weapons and equipment are gradually developing towards high performance, intelligence and miniaturization, which requires making full use of the parallel computing capability of multi-core DSPs. The multi-core DSP mainly provides two parallel computing models: the OpenMP model for shared storage and the OpenEM model for distributed storage.

[0003] In the OpenMP model, data computation and transmission are mainly completed through shared memory, so there is no data flow scheduling problem. In the OpenEM model, data must be transmitted to local storage before computation, so data flow scheduling is required. The scheduling method is dynamic loa...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06F 9/50; G06F 9/54
Inventors: 江磊, 刘从新, 李申
Owner: XIAN MICROELECTRONICS TECH INST