163 results about "Parallel scheduling" patented technology

Resource state information-based grid task scheduling processor and grid task scheduling processing method

The invention discloses a grid task scheduling processor based on resource state information. The processor comprises a plurality of distributed grid scheduling nodes, each connected to the other grid scheduling nodes. Each grid scheduling node has a two-layer structure: the top layer holds a virtual scheduling manager and the bottom layer holds a plurality of parallel scheduling executors. The invention also provides a grid task scheduling processing method. Together, the processor and the method establish a distributed grid resource scheduling system. A two-level scheduling-node management method unifies the management and coordination of the local scheduling executors, so that the failure of any single scheduling executor is tolerated and no scheduled task waits excessively long on any one executor. Local resource state feedback is obtained through an analysis and evaluation method based on resource node properties, which reduces the delay of fetching resource state over the network and further improves the efficiency of grid computing task scheduling.
Owner:THE 28TH RES INST OF CHINA ELECTRONICS TECH GROUP CORP
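The two-layer idea above can be sketched briefly. This is a hypothetical illustration, not the patent's implementation: a virtual manager dispatches each task to the least-loaded of several parallel executors, reading each executor's locally cached load instead of querying resource state over the network. All class and task names are invented.

```python
class Executor:
    def __init__(self, name):
        self.name = name
        self.queue = []          # tasks currently waiting on this executor

    def load(self):
        return len(self.queue)   # local resource-state feedback, no network hop

class VirtualManager:
    def __init__(self, executors):
        self.executors = executors

    def dispatch(self, task):
        # Pick the executor with the shortest queue, so no task waits
        # overlong on a single busy (or failed and drained) executor.
        target = min(self.executors, key=Executor.load)
        target.queue.append(task)
        return target.name

mgr = VirtualManager([Executor("e1"), Executor("e2"), Executor("e3")])
placements = [mgr.dispatch(f"task{i}") for i in range(6)]
print(placements)  # tasks spread round-robin across the three executors
```

With equal-cost tasks this degenerates to round-robin; in practice the load metric would come from the patent's resource-node property evaluation.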

Decoupled parallel scheduling method for dependent tasks in cloud computing

The invention belongs to the field of cloud computing applications and relates to methods for describing task dependency relations, decoupling them, and scheduling the tasks in parallel in cloud services. It defines dependent-task relations and constructs a decoupled parallel scheduling method for dependent tasks. The method first decouples from the dependency relation those tasks whose in-degree is zero, building a ready-task set that dynamically describes the tasks schedulable in parallel at any moment. The ready-task set is then scheduled in a distributed, multi-objective manner according to real-time resource availability, which effectively improves scheduling parallelism. During task distribution, the ratio of task execution cost to inter-task communication cost (E/C) is further considered to decide whether task duplication should replace the transmission of dependency data, reducing communication overhead. The method as a whole schedules multiple tasks from the ready set dynamically and in parallel, balances performance indexes including real-time performance, parallelism, communication overhead and load balance, and effectively improves overall system performance through its dynamic scheduling strategy.
Owner:DALIAN UNIV OF TECH
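The in-degree-zero decoupling step above is essentially Kahn-style topological layering. As a minimal sketch (graph and task names invented, and without the patent's E/C duplication decision), each "wave" is a ready set whose members can be dispatched in parallel:

```python
def ready_waves(deps):
    """deps maps each task to the set of tasks it depends on."""
    indeg = {t: len(d) for t, d in deps.items()}
    children = {t: [] for t in deps}
    for t, d in deps.items():
        for parent in d:
            children[parent].append(t)

    waves = []
    ready = sorted(t for t, n in indeg.items() if n == 0)  # in-degree zero
    while ready:
        waves.append(ready)               # this whole wave can run in parallel
        nxt = []
        for t in ready:                   # "finishing" a task releases children
            for c in children[t]:
                indeg[c] -= 1
                if indeg[c] == 0:
                    nxt.append(c)
        ready = sorted(nxt)
    return waves

deps = {"a": set(), "b": set(), "c": {"a"}, "d": {"a", "b"}, "e": {"c", "d"}}
print(ready_waves(deps))  # [['a', 'b'], ['c', 'd'], ['e']]
```

In the patented method the ready set is rebuilt dynamically as tasks actually finish, rather than in fixed waves, but the decoupling rule is the same.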

Computation-complexity-aware macroblock-level parallel scheduling method for video decoding

The invention discloses a macroblock-level parallel scheduling method for video decoding that is aware of computational complexity. The method rests on two key techniques. First, a linear model for predicting macroblock decoding complexity is built from entropy-decoding results and reordered macroblock information, such as the number of non-zero coefficients, the inter-frame predictive coding types and the motion vectors; complexity analysis is performed on each module, so that known macroblock information is fully exploited to raise parallel efficiency. Second, with the macroblock decoding dependencies satisfied, decoding complexity is combined with parallel computation: macroblocks are grouped for parallel execution according to the complexity ordering, the group size is determined dynamically from the computing capability of the GPU, and the number of groups is determined dynamically from the number of macroblocks currently executing in parallel, so that the kernel-launch frequency is controlled while the GPU remains fully utilized and high parallel efficiency is achieved. In addition, a buffer mechanism enables cooperative parallel operation of the CPU and the GPU, making full use of resources and reducing idle waiting.
Owner:HUAZHONG UNIV OF SCI & TECH
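The predict-sort-group pipeline can be sketched as follows. This is purely illustrative: the feature names and weights are invented stand-ins for the patent's trained linear model, and the fixed `gpu_capacity` stands in for the dynamically measured GPU capability.

```python
# Invented weights for a linear complexity model over per-macroblock features.
WEIGHTS = {"nonzero_coeffs": 1.0, "inter_pred": 4.0, "motion_vectors": 0.5}

def predict_complexity(mb):
    # Linear model: weighted sum of the macroblock's decoded features.
    return sum(WEIGHTS[k] * mb[k] for k in WEIGHTS)

def group_macroblocks(mbs, gpu_capacity):
    # Sort by predicted complexity, then cut into batches whose size is
    # bounded by the (assumed) GPU capacity; each batch is one kernel launch.
    ranked = sorted(mbs, key=predict_complexity, reverse=True)
    size = max(1, min(gpu_capacity, len(ranked)))
    return [ranked[i:i + size] for i in range(0, len(ranked), size)]

mbs = [
    {"nonzero_coeffs": 2, "inter_pred": 1, "motion_vectors": 4},   # -> 8.0
    {"nonzero_coeffs": 10, "inter_pred": 0, "motion_vectors": 0},  # -> 10.0
    {"nonzero_coeffs": 1, "inter_pred": 0, "motion_vectors": 2},   # -> 2.0
]
batches = group_macroblocks(mbs, gpu_capacity=2)
```

Batching similar-complexity macroblocks together keeps the GPU threads in one launch roughly balanced, which is the point of the complexity ordering.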

Dependency mesh based instruction-level parallel scheduling method

The invention discloses a dependency-mesh-based instruction-level parallel scheduling method. The method comprises the following steps: 1) acquiring the data dependency relations among the instructions of a target basic block and the functional units to which the instructions correspond, and computing a data-dependency priority value for each instruction from those relations; 2) partitioning the instructions according to the data-dependency priority values and the functional units, storing the partitioned result in mesh form, and thereby establishing a dependency mesh that records the relations between the instructions and their data-dependency priorities as well as between the instructions and their functional units; 3) analysing the parallelism among the instructions according to the relations recorded in the dependency mesh obtained in step 2). By combining the data dependency relations with the functional-unit allocation relations, the method can describe both the parallel relations among instructions and their correlation with the hardware structure; it is simple to implement, widely applicable, and achieves high instruction-level parallelism.
Owner:NAT UNIV OF DEFENSE TECH
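One plausible reading of steps 1) and 2) can be sketched as follows, with the priority value taken as the instruction's depth on its longest dependency chain (a common critical-path heuristic; the patent's exact formula is not given here). Instruction names and functional units are invented.

```python
def priorities(succs):
    """succs maps each instruction to the instructions that depend on it.
    Priority = length of the longest dependency chain starting here."""
    memo = {}
    def depth(i):
        if i not in memo:
            memo[i] = 1 + max((depth(s) for s in succs[i]), default=0)
        return memo[i]
    return {i: depth(i) for i in succs}

def build_mesh(succs, unit_of):
    # Bucket instructions by (priority, functional unit): each priority row
    # of the mesh holds instructions that may issue together, split per unit.
    prio = priorities(succs)
    mesh = {}
    for instr, p in prio.items():
        mesh.setdefault(p, {}).setdefault(unit_of[instr], []).append(instr)
    return mesh

succs = {"load": ["add"], "mul": ["store"], "add": ["store"], "store": []}
unit_of = {"load": "mem", "add": "alu", "mul": "alu", "store": "mem"}
mesh = build_mesh(succs, unit_of)
print(mesh)  # rows keyed by priority, columns keyed by functional unit
```

Reading the mesh row by row (highest priority first) then gives the scheduler both the parallel candidates and the hardware units they would contend for.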

Heterogeneous computing platform and acceleration method on basis of heterogeneous computing platform

The invention discloses a heterogeneous computing platform. The platform comprises a host and a plurality of programmable devices, the host being connected to each programmable device. The host initializes the programmable devices, schedules them in parallel, sends computing data to each device and collects the computing results; each programmable device processes the data distributed to it in parallel. Because the programmable devices compute simultaneously, the overall operation speed of the platform approaches the sum of the individual devices' operation speeds. Compared with prior-art heterogeneous computing platforms containing only one programmable device, both the overall operation speed and the degree of parallelism are improved, so that computing efficiency rises and the platform can better satisfy the speed requirements imposed by increasingly complex algorithms and ever-larger data. The invention further provides an acceleration method based on the heterogeneous computing platform.
Owner:ZHENGZHOU YUNHAI INFORMATION TECH CO LTD
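The host-side pattern described above — split the data, let all devices compute concurrently, gather the partial results — can be sketched with a thread pool standing in for the device interconnect. The function and parameter names are invented, and `device_compute` is a placeholder for a real offload call:

```python
from concurrent.futures import ThreadPoolExecutor

def device_compute(device_id, chunk):
    # Stand-in for offloading `chunk` to programmable device `device_id`
    # and waiting for its result; here it just sums squares on the CPU.
    return sum(x * x for x in chunk)

def run_on_platform(data, n_devices):
    # Host side: partition the data, dispatch all chunks concurrently,
    # then combine the per-device partial results.
    chunks = [data[i::n_devices] for i in range(n_devices)]
    with ThreadPoolExecutor(max_workers=n_devices) as pool:
        partials = pool.map(device_compute, range(n_devices), chunks)
    return sum(partials)

print(run_on_platform(list(range(10)), 3))  # 285 = sum of squares 0..9
```

The strided split keeps the chunks close in size; a real host would instead size each chunk to the device's measured throughput.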

Cloud resource scheduling method and system based on Kubernetes

The invention provides a Kubernetes-based cloud resource scheduling method and system. The method comprises the following steps: creating a Pod in response to a resource-object creation request, and recording the required scheduler type from that request in the created Pod's configuration; once the created Pod is detected, selecting a scheduler from a pre-deployed scheduler group of the required type, the group comprising a plurality of schedulers of the same type, and writing the selected scheduler's identifier into the Pod's configuration; when the selected scheduler observes its own identifier, determining the cluster node to allocate to the Pod and sending a binding request; on receipt of the binding request, verifying through an admission controller that the node's resources can still satisfy the Pod's declared resource requirements, and recording the cluster node's identifier in the Pod; and when the corresponding cluster node observes that the Pod carries its identifier, starting the Pod and creating the corresponding container. Under this scheme multiple schedulers can schedule cloud resources in parallel, improving resource scheduling efficiency.
Owner:CETC CLOUD (BEIJING) TECH CO LTD
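The scheduler-group selection step can be sketched as follows. This is a hedged illustration, not Kubernetes API code: the group names, the round-robin policy, and the dict-based "Pod" are all invented, and in a real cluster the identifier would be written to the Pod's `schedulerName` field for the matching scheduler to watch.

```python
import itertools

# Pre-deployed scheduler groups, several schedulers per type (names invented).
SCHEDULER_GROUPS = {
    "gpu": ["gpu-sched-0", "gpu-sched-1"],
    "batch": ["batch-sched-0"],
}
_cursors = {t: itertools.cycle(group) for t, group in SCHEDULER_GROUPS.items()}

def assign_scheduler(pod):
    # Pick one concrete scheduler from the group matching the pod's
    # requested type and record its identifier in the pod's configuration.
    pod["schedulerName"] = next(_cursors[pod["schedulerType"]])
    return pod

pods = [{"name": f"p{i}", "schedulerType": "gpu"} for i in range(3)]
assigned = [assign_scheduler(p)["schedulerName"] for p in pods]
print(assigned)
```

Spreading pods across same-type schedulers is what lets the group schedule in parallel instead of queueing behind a single scheduler instance.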

Three-dimensional space data parallel scheduling method and system

The invention relates to the technical field of geospatial information systems and provides a content-based parallel scheduling method for three-dimensional spatial data. The method comprises the following steps: receiving three-dimensional spatial data scheduling requests from a plurality of clients; decomposing each scheduling task, using a content-based parallel decomposition method, into three-dimensional spatial data scheduling sub-tasks that can execute in parallel; allocating an available task-execution thread and a database connection to each sub-task, and querying the three-dimensional spatial database in parallel; and recombining the data returned by the sub-task queries before sending them back to the clients. By decomposing and executing the scheduling tasks in parallel, the method improves the efficiency of retrieving different types of three-dimensional spatial data from the database and enhances the real-time visualization capability of a three-dimensional geographic information system (GIS).
Owner:SHENZHEN INST OF ADVANCED TECH
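The decompose / parallel-query / recombine flow can be sketched with one sub-task per requested content type. Everything here is illustrative: the content-type names are invented and the dict stands in for the three-dimensional spatial database and its per-thread connections.

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-in for the 3-D spatial database, keyed by data content type.
FAKE_DB = {
    "terrain": ["tile_t1", "tile_t2"],
    "buildings": ["bldg_b1"],
    "vectors": ["road_v1", "road_v2"],
}

def query_subtask(content_type):
    # One sub-task: query a single content type on its own thread/connection.
    return FAKE_DB.get(content_type, [])

def schedule_request(content_types):
    # Decompose the request by content, query all sub-tasks in parallel,
    # then recombine the results in request order before returning.
    with ThreadPoolExecutor(max_workers=len(content_types)) as pool:
        results = pool.map(query_subtask, content_types)
    merged = []
    for r in results:
        merged.extend(r)
    return merged

print(schedule_request(["terrain", "vectors"]))
```

`ThreadPoolExecutor.map` returns results in submission order, which is what makes the simple ordered recombination step correct.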