38 results for "pipeline scheduling" patented technology

Data flow compilation optimization method oriented to multi-core cluster

Active · CN103970580A · Implementing a three-level optimization process · Improve execution performance · Resource allocation · Memory systems · Cache optimization · Data stream
The invention discloses a data flow compilation optimization method oriented to multi-core cluster systems. The method comprises the following steps: task partitioning and scheduling are determined to map computation tasks onto processing cores; based on the partitioning and scheduling results, hierarchical pipeline scheduling is constructed, with pipeline schedule tables both among cluster nodes and among the cores within each node; and cache-based optimization is performed according to the structural characteristics of the multi-core processor, the communication patterns among the cluster nodes, and the execution behavior of the data flow program on the multi-core processor. The method combines the data flow program with architecture-specific optimization techniques, fully exploits the load balance and parallelism of synchronous/asynchronous mixed pipelined code on a multi-core cluster, and optimizes the program's cache accesses and communication transfers according to the cache and communication modes of the cluster, thereby improving execution performance and shortening execution time.
Owner:HUAZHONG UNIV OF SCI & TECH
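
A rough Python sketch of the hierarchical scheduling step described above: one pipeline schedule table across cluster nodes, plus one across the cores inside each node. The Task model, the stage assignments, and the build_schedule helper are illustrative assumptions, not the patent's actual data structures.

from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    node: int   # cluster node the task is partitioned onto
    core: int   # core inside that node
    stage: int  # software-pipeline stage assigned by the scheduler

def build_schedule(tasks):
    """Derive a node-level schedule table and per-node core-level tables."""
    node_table = defaultdict(list)                        # stage -> task names, across nodes
    core_tables = defaultdict(lambda: defaultdict(list))  # node -> stage -> (core, task)
    for t in tasks:
        node_table[t.stage].append(t.name)
        core_tables[t.node][t.stage].append((t.core, t.name))
    return node_table, core_tables

tasks = [Task("source", 0, 0, 0), Task("filter", 0, 1, 1), Task("sink", 1, 0, 2)]
node_table, core_tables = build_schedule(tasks)
print(dict(node_table))                                   # inter-node pipeline stages
print({n: dict(s) for n, s in core_tables.items()})       # intra-node pipeline stages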

Flow compilation optimization method oriented to chip multi-core processor

The invention discloses a flow compilation optimization method oriented to a chip multi-core processor. The method includes a software pipeline scheduling step, a memory access optimization step, and a communication optimization step. The software pipeline scheduling step generates a software pipeline schedule table. The memory access optimization step allocates the data required by each computation task between the on-chip scratchpad memory (SPM) and the main memory of the processor according to the schedule table. The communication optimization step determines the mapping with the lowest communication traffic according to the on-chip network topology of the processor, and maps each virtual processing core in the schedule table onto an actual physical core accordingly. By combining the flow program with architecture-specific optimization techniques, the method fully exploits the load balance and parallelism of software-pipelined code on the multi-core processor, optimizes memory accesses and communication transfers for the processor's memory hierarchy and communication modes, improves execution performance, and shortens execution time.
Owner:HUAZHONG UNIV OF SCI & TECH
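
The communication optimization step (mapping virtual processing cores onto physical cores so that traffic-weighted distance over the on-chip network is minimized) can be sketched as a brute-force search over placements on a tiny mesh. The traffic matrix, the 2x2 mesh coordinates, and the Manhattan-hop cost model below are assumptions made for illustration, not the patent's algorithm.

from itertools import permutations

traffic = {(0, 1): 10, (1, 2): 4, (2, 3): 7}             # virtual-core pairs -> bytes/iteration
coords = {0: (0, 0), 1: (0, 1), 2: (1, 0), 3: (1, 1)}    # physical core -> mesh (x, y)

def hops(a, b):
    (x1, y1), (x2, y2) = coords[a], coords[b]
    return abs(x1 - x2) + abs(y1 - y2)                   # Manhattan distance on the mesh

def cost(mapping):
    """Total traffic volume weighted by hop distance under this placement."""
    return sum(vol * hops(mapping[u], mapping[v]) for (u, v), vol in traffic.items())

best = min(permutations(range(4)), key=cost)
print("virtual->physical:", dict(enumerate(best)), "cost:", cost(best))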

Improved unified particle swarm algorithm-based mechanical part machining pipeline scheduling method

Inactive · CN110471274A · Optimal scheduling scheme · Easy to operate · Adaptive control · Learning based · Local optimum
The invention discloses a mechanical part machining pipeline (flow-shop) scheduling method employing a discrete unified particle swarm algorithm with dynamic neighborhoods and comprehensive learning. The method comprises the following steps: reading the operation times of the mechanical part machining; initializing the population; calculating the fitness value of each particle and sorting the particles; updating the neighborhood best position, individual best position, global best position, and learning exemplar of each particle; performing global search with the dynamic-neighborhood, comprehensive-learning discrete unified particle swarm optimization; performing local search based on an elite learning strategy; regrouping the population after a set number of iterations; and recombining the learning exemplars when updating of the global best solution stalls. The method addresses the limitations of unified particle swarm optimization with dynamic neighborhoods and comprehensive learning in the field of production scheduling, and overcomes the drawbacks of standard particle swarm optimization, namely heavy dependence on parameters and a tendency to fall into local optima. It offers high search accuracy and a fast convergence rate, has a relatively wide application range, and can be extended to the manufacturing and process industries.
Owner:YUYAO ROBOT RES CENT OF ZHEJIANG UNIV +1
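
Whatever the particle encoding, such a scheduler ranks particles by a flow-shop fitness function. Below is a minimal Python sketch of the standard permutation flow-shop makespan recurrence that would serve as that fitness; the job and machine counts and the processing times are made up.

def makespan(perm, p):
    """perm: job visiting order; p[j][m]: processing time of job j on machine m."""
    machines = len(p[0])
    done = [0.0] * machines                       # completion time on each machine
    for j in perm:
        for m in range(machines):
            start = max(done[m], done[m - 1] if m else 0.0)
            done[m] = start + p[j][m]             # job j finishes on machine m
    return done[-1]                               # completion on the last machine

p = [[3, 2, 4], [1, 5, 2], [4, 1, 3]]             # 3 jobs x 3 machines
print(makespan([0, 1, 2], p), makespan([1, 0, 2], p))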

Distributed data management system based on a novel pipeline scheduling algorithm

The invention relates to the technical field of data management, in particular to a distributed data management system based on a novel pipeline scheduling algorithm. The system comprises an infrastructure unit, a pipeline scheduling unit, a unified service unit, and a data management unit. The infrastructure unit manages the network architecture supporting system operation; the pipeline scheduling unit manages the scheduling of inter-process communication pipes; the unified service unit performs unified, standardized management of the data; and the data management unit manages the data comprehensively. This design realizes distributed data management, improves the degree of integration of the system, and enhances its extensibility; it also improves transmission during data exchange and strengthens data integrity and security. Through centralized scheduling management of the data, the architecture facilitates node expansion, supports highly concurrent access by large numbers of users, enables visual management of the full data life cycle, shortens the data management cycle, and reduces cost waste.
Owner:CNBM INFORMATION TECH CO LTD +1
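
As a loose illustration of what the pipeline scheduling unit might do, the sketch below dispatches each data-exchange request to the least-loaded of several inter-process pipes. The least-pending-bytes load metric and the use of multiprocessing.Pipe are assumptions; the patent does not specify either.

import heapq
from multiprocessing import Pipe

N_PIPES = 3
pipes = [Pipe() for _ in range(N_PIPES)]          # (sender, receiver) connection pairs
load = [(0, i) for i in range(N_PIPES)]           # (pending bytes, pipe index) min-heap
heapq.heapify(load)

def dispatch(payload: bytes):
    """Send payload on the pipe with the fewest pending bytes."""
    pending, idx = heapq.heappop(load)            # pick the least-loaded pipe
    pipes[idx][0].send_bytes(payload)
    heapq.heappush(load, (pending + len(payload), idx))
    return idx

for msg in (b"alpha", b"bb", b"cccccc"):
    print("sent on pipe", dispatch(msg))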

Pipeline scheduling method and device

The invention discloses a pipeline scheduling device comprising three physical pipelines, two mix buffers, and a pipe bus. The three physical pipelines are logically divided into five pipelines, corresponding to the five stages of packet processing in the chip: the parse stage, the bridge stage, the router stage, the post stage, and the egress stage. The two mix buffers schedule the different services and send them to the corresponding logical pipelines. Each pipeline unit (pipe), the mix buffers, and the pdsrc are attached to the pipe bus, which completes the interaction among the pipeline units. The invention optimizes the pipeline processing length for various services, so that different services can take different processing lengths and unnecessary processing is reduced. The pipe bus replaces the original fixed connections among the pipeline members, making their interconnection more flexible and highly expandable to meet future requirements. The invention further provides a corresponding pipeline scheduling method.
Owner:FENGHUO COMM SCI & TECH CO LTD +1
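
The idea that different services traverse logical pipelines of different lengths can be shown with a short Python sketch. The five stage names follow the abstract; the service types and their per-service stage paths are invented for illustration.

STAGES = ["parse", "bridge", "router", "post", "egress"]
SERVICE_PATH = {
    "l2_switch": ["parse", "bridge", "egress"],          # shorter logical pipeline
    "l3_route":  ["parse", "router", "post", "egress"],  # longer logical pipeline
}

def process(service, packet):
    """Run a packet through only the stages its service type needs."""
    for stage in SERVICE_PATH[service]:
        packet = f"{packet}->{stage}"                    # stand-in for real stage logic
    return packet

print(process("l2_switch", "pkt0"))
print(process("l3_route", "pkt1"))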

A Hybrid Swarm Intelligence Optimization Method for Distributed Blocking Pipeline Scheduling

The invention discloses a hybrid swarm intelligence optimization method for distributed blocking pipeline (flow-shop) scheduling, which includes: performing cooperative initialization over multiple factories and multiple workpieces to generate a set of first factory workpiece processing sequences; calculating an adaptive parameter for each first sequence and adjusting the sequences accordingly to generate a set of second factory workpiece processing sequences; generating a set of third factory workpiece processing sequences through local search and adjustment within each factory and between factories; applying a regeneration mechanism to some of the third sequences and updating the set to determine the current optimal factory workpiece processing sequence; and returning to the second step to iteratively update the current optimal sequence until a preset termination condition is met, then outputting the optimal factory workpiece processing sequence. The method combines adaptive search with local search, so that individuals in the population can adjust their own search ranges, balancing the coarse-search and fine-search capabilities of the algorithm.
Owner:TSINGHUA UNIV
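
The evaluation step such a method relies on is the makespan of a blocking flow shop (a finished workpiece holds its machine until the next machine is free, since there are no intermediate buffers), taken over the slowest factory. A minimal sketch follows; the job-to-factory split and the processing times are illustrative.

def blocking_makespan(perm, p):
    """Blocking flow shop: a job holds its machine until the next one is free."""
    M = len(p[0])
    dep = [0.0] * M                               # previous job's departure times
    for j in perm:
        new = []
        for m in range(M):
            finish = (new[m - 1] if m else dep[0]) + p[j][m]   # arrive, then process
            # blocked on machine m until the previous job departs machine m+1
            new.append(max(finish, dep[m + 1]) if m < M - 1 else finish)
        dep = new
    return dep[-1]

p = [[2, 3], [4, 1], [3, 3], [1, 2]]              # 4 jobs x 2 machines
factories = [[0, 2], [1, 3]]                      # workpiece sequences per factory
print(max(blocking_makespan(seq, p) for seq in factories))    # slowest factory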

GPU-based N-body simulation program performance optimization method

The invention relates to a GPU (Graphics Processing Unit)-based N-body simulation program performance optimization method, which comprises the following steps: transmitting the relevant index information to the GPU, so that construction of the short-range force list is migrated to the GPU and parallelized; changing the thread-block scheduling mode and loading particle information into the GPU's shared memory in turn through GPU pipeline scheduling; calculating the short-range force in a GPU kernel function using interpolation polynomials and mixed precision, where the interpolation constants are computed on the CPU, transmitted to the GPU, and stored in shared memory; and reordering the short-range force results of all particles on the GPU, merging them by reduction in GPU global memory, and transmitting the final result back to the CPU once all particles have been processed. The method reduces data transfer from CPU memory to GPU video memory, reduces the latency of repeated memory accesses, improves data-access efficiency during the GPU's short-range force computation, reduces data transfer from GPU video memory back to CPU memory, and shortens the time needed to update information on the CPU side.
Owner:COMP NETWORK INFORMATION CENT CHINESE ACADEMY OF SCI
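
Although the patent targets CUDA, the pipelined (double-buffered) staging of particle tiles into shared memory that it describes can be mimicked in plain Python: one tile is consumed while the next is staged. The 1-D positions, tile size, and inverse-square cutoff force below are illustrative stand-ins for the real kernel.

import math

def short_range_forces(pos, tile=4, cutoff=2.0):
    n = len(pos)
    forces = [0.0] * n
    staged = pos[0:tile]                           # "prefetch" the first tile
    for base in range(0, n, tile):
        tile_now = staged                          # consume the staged tile
        staged = pos[base + tile:base + 2 * tile]  # stage the next tile meanwhile
        for i in range(n):
            for k, xj in enumerate(tile_now):
                j = base + k
                if i == j:
                    continue
                r = abs(pos[i] - xj)
                if r < cutoff:                     # short-range cutoff
                    forces[i] += math.copysign(1.0 / (r * r), pos[i] - xj)
    return forces

print(short_range_forces([0.0, 0.5, 1.2, 3.0, 3.3, 5.0])[:3])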