294 results for "Task parallelism" patented technology

Task parallelism (also known as function parallelism and control parallelism) is a form of parallelization of computer code across multiple processors in parallel computing environments. Task parallelism focuses on distributing tasks—concurrently performed by processes or threads—across different processors. In contrast to data parallelism which involves running the same task on different components of data, task parallelism is distinguished by running many different tasks at the same time on the same data. A common type of task parallelism is pipelining which consists of moving a single set of data through a series of separate tasks where each task can execute independently of the others.
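To make the pipelining example concrete, the following is a minimal sketch of pipeline-style task parallelism in Python: three stages run as separate threads and hand items to one another through queues, so different tasks execute at the same time on the same stream of data. The stage functions and the toy workload are illustrative only.

```python
# Pipeline-style task parallelism: each stage is a distinct task running in
# its own thread; data items flow through the stages via queues.
import threading
import queue

SENTINEL = object()  # marks the end of the stream

def stage(fn, inbox, outbox):
    """Apply fn to every item from inbox and forward results to outbox."""
    while True:
        item = inbox.get()
        if item is SENTINEL:
            outbox.put(SENTINEL)
            break
        outbox.put(fn(item))

def run_pipeline(items, stages):
    """Wire the stage functions together with queues and run them in parallel."""
    queues = [queue.Queue() for _ in range(len(stages) + 1)]
    threads = [
        threading.Thread(target=stage, args=(fn, queues[i], queues[i + 1]))
        for i, fn in enumerate(stages)
    ]
    for t in threads:
        t.start()
    for item in items:
        queues[0].put(item)
    queues[0].put(SENTINEL)
    results = []
    while True:
        out = queues[-1].get()
        if out is SENTINEL:
            break
        results.append(out)
    for t in threads:
        t.join()
    return results

if __name__ == "__main__":
    # Each stage performs a different task on the same stream of data.
    print(run_pipeline(range(5), [lambda x: x + 1, lambda x: x * 2, str]))
```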

Express cabinet delivery control method, express cabinet pickup control method and control device

The invention provides an express cabinet delivery control method, an express cabinet pickup control method and a control device. The delivery control method comprises the steps of: receiving order information; sending the order information to a server so that the server searches for nearby express cabinets according to the address information and reserves an empty express bin; receiving and displaying the address information of the reserved express bin; establishing a connection with the express cabinet to open the bin when the distance to the reserved bin is within a preset range; and receiving express item information and sending it to the server so that the server notifies the receiver to pick up the item. With this method and device, connections with multiple senders/receivers can be handled at the same time, so no association needs to be configured between a main cabinet and auxiliary cabinets and no human-computer interaction at the main cabinet is required, realizing a multi-task access mode. Users do not need to wait in line, and the number of express bins can be expanded without limit to suit different application scenarios.
Owner:SF TECH
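The delivery-side flow described in this abstract can be sketched as a simple client/server interaction. The sketch below is hypothetical: class and method names such as reserve_bin and notify_receiver, the proximity threshold, and the fixed bin id are illustrative assumptions, not details from the patent.

```python
# Hypothetical sketch of the delivery-side flow: reserve a bin, display it,
# open it when the courier is within range, then notify the receiver.
from dataclasses import dataclass

@dataclass
class Order:
    order_id: str
    address: str

class CabinetServer:
    """Stands in for the server that reserves bins and notifies receivers."""
    def reserve_bin(self, address):
        # The described method searches nearby cabinets for an empty bin and
        # reserves it; here we simply return a fixed bin id.
        return {"cabinet": "C-01", "bin": 7, "address": address}

    def notify_receiver(self, order_id, bin_info):
        print(f"notify receiver: order {order_id} is in bin {bin_info['bin']}")

def deliver(order, server, distance_to_bin_m, open_range_m=50):
    bin_info = server.reserve_bin(order.address)          # reserve an empty bin
    print(f"reserved bin: {bin_info}")                    # display reserved bin info
    if distance_to_bin_m <= open_range_m:                 # proximity check
        print("connecting to cabinet and opening bin...")
        server.notify_receiver(order.order_id, bin_info)  # notify pickup
    else:
        print("courier not yet within range of the reserved bin")

deliver(Order("A123", "1 Example Rd"), CabinetServer(), distance_to_bin_m=20)
```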

Platform architecture supporting multi-GPU (Graphics Processing Unit) virtualization and work method of platform architecture

The invention provides a platform architecture supporting multi-GPU (Graphics Processing Unit) virtualization and a working method of the platform architecture. By deploying middleware at the GPU server end and on the virtual machine side, and using transports such as sockets or InfiniBand as the transmission medium, the architecture makes up for the inability of the original virtual machine platform to accelerate with GPUs. The architecture manages GPU resources through one or more centrally controlled management nodes, divides the GPU resources at fine granularity, and provides multi-task parallel execution. A virtual machine requests GPU resources from the management nodes through the middleware and uses them for acceleration; a GPU server registers its GPU resources with the management nodes through the middleware and uses them to provide service. The architecture brings the parallel processing capability of the GPU into the virtual machine and, combined with the management mechanism, maximizes GPU utilization. It can effectively reduce energy consumption and increase computing efficiency.
Owner:NANJING UNIV OF AERONAUTICS & ASTRONAUTICS
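The central-management idea in this abstract can be illustrated with a small resource manager that tracks registered GPUs, splits each into fixed-size slices, and grants slices to requesting virtual machines. All names, the slice granularity, and the in-process calls are assumptions for illustration; the patent's middleware and transport layer (sockets or InfiniBand) are not modelled here.

```python
# A minimal sketch of a centrally controlled management node for GPU slices.
class GpuManagementNode:
    def __init__(self, slices_per_gpu=4):
        self.slices_per_gpu = slices_per_gpu
        self.free_slices = []        # (gpu_id, slice_index)
        self.allocations = {}        # vm_id -> list of granted slices

    def register_gpu(self, gpu_id):
        """Called (via middleware) by a GPU server offering its resources."""
        self.free_slices += [(gpu_id, i) for i in range(self.slices_per_gpu)]

    def request(self, vm_id, n_slices):
        """Called (via middleware) by a VM that wants GPU acceleration."""
        if len(self.free_slices) < n_slices:
            return None  # not enough capacity right now
        granted = [self.free_slices.pop() for _ in range(n_slices)]
        self.allocations.setdefault(vm_id, []).extend(granted)
        return granted

    def release(self, vm_id):
        self.free_slices += self.allocations.pop(vm_id, [])

node = GpuManagementNode()
node.register_gpu("gpu0")
node.register_gpu("gpu1")
print(node.request("vm-a", 3))   # three slices drawn from the registered GPUs
node.release("vm-a")
```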

Task scheduling processing method and device and computer equipment

The invention relates to a task scheduling processing method and device and computer equipment. The method comprises: constructing a directed acyclic graph based on the dependency relationships between tasks; constructing a task scheduling queue by performing a depth-first traversal of the constructed graph; and finally controlling the scheduling and execution of the tasks based on the task scheduling queue and the dependency relationships. Tasks with dependency relationships are executed in series according to those relationships, while at least some of the tasks without dependency relationships are executed in parallel. Because the directed acyclic graph reflects the task dependencies, independent tasks can run in parallel, which increases the utilization of computing resources and improves computing efficiency. At the same time, dependent tasks are executed in series according to their dependencies, so prerequisite tasks are not executed repeatedly, further improving task execution efficiency.
Owner:TENCENT TECH (SHENZHEN) CO LTD
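The core idea of this abstract — run tasks whose prerequisites are satisfied in parallel, and dependent tasks in series — can be sketched with a thread pool driven by a dependency map. The graph, task bodies, and scheduling loop below are a toy illustration, not the patent's implementation.

```python
# DAG-driven scheduling sketch: submit every task whose prerequisites are
# finished; dependent tasks wait until their predecessors complete.
from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED

def run_dag(tasks, deps, max_workers=4):
    """tasks: name -> callable; deps: name -> set of prerequisite names."""
    remaining = {name: set(d) for name, d in deps.items()}
    done, running = set(), {}
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        while len(done) < len(tasks):
            # submit every task whose prerequisites are all satisfied
            ready = [n for n in tasks
                     if n not in done and n not in running and remaining[n] <= done]
            for name in ready:
                running[name] = pool.submit(tasks[name])
            finished, _ = wait(running.values(), return_when=FIRST_COMPLETED)
            for name in [n for n, f in running.items() if f in finished]:
                running.pop(name).result()   # re-raise any task exception
                done.add(name)

tasks = {n: (lambda n=n: print("ran", n)) for n in "ABCD"}
deps = {"A": set(), "B": set(), "C": {"A", "B"}, "D": {"C"}}
run_dag(tasks, deps)   # A and B may run in parallel; C waits for both; D waits for C
```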

Multilevel multitask parallel decoding algorithm on multicore processor platform

Patent: CN105992008A (Active). Benefit: an optimized design structure that makes more effective use of the processor. Keywords: digital video signal modification, round complexity, imaging quality.
The invention discloses a multilevel, multi-task parallel decoding algorithm for a multicore processor platform. Addressing the large data volume of high-definition video and the very high processing complexity of HEVC decoding, it exploits the dependency structure of HEVC data to combine task and data parallelism effectively on a multicore platform. HEVC decoding is divided into two tasks, frame-level entropy decoding and CTU-level data decoding, which are processed in parallel at different granularities: the entropy decoding task is parallelized at the frame level, and the CTU data decoding task is parallelized by CTU data row. Each task runs in an independent thread bound to its own core, so the parallel computing capability of the multicore processor is fully utilized and real-time parallel decoding of a full-HD HEVC single stream encoded without parallel coding tools can be achieved. Compared with serial decoding, the multicore parallel algorithm greatly increases the decoding speedup while preserving decoded image quality.
Owner:NANJING UNIV OF POSTS & TELECOMM
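The two-task split described above can be sketched as a small producer/consumer pipeline: one thread performs frame-level "entropy decoding" while a pool of workers handles "CTU-row decoding" in parallel. The decode functions are placeholders, and the per-core thread binding mentioned in the abstract is omitted; this is only a shape-of-the-idea sketch.

```python
# Two-stage decoding sketch: frame-level entropy decoding feeds CTU rows to a
# pool of row-decoding workers running in parallel.
import threading, queue
from concurrent.futures import ThreadPoolExecutor

def entropy_decode(frame_id):
    """Placeholder: would parse one frame's bitstream into CTU rows."""
    return [f"frame{frame_id}-row{r}" for r in range(4)]

def decode_ctu_row(row):
    """Placeholder: would reconstruct the pixels of one CTU row."""
    return f"decoded {row}"

def decode_stream(n_frames, n_row_workers=4):
    rows_q = queue.Queue()

    def entropy_task():                       # task 1: frame-level entropy decoding
        for f in range(n_frames):
            for row in entropy_decode(f):
                rows_q.put(row)
        rows_q.put(None)                      # end-of-stream marker

    threading.Thread(target=entropy_task).start()
    with ThreadPoolExecutor(max_workers=n_row_workers) as pool:   # task 2: CTU rows
        futures = []
        while (row := rows_q.get()) is not None:
            futures.append(pool.submit(decode_ctu_row, row))
        for fut in futures:
            print(fut.result())

decode_stream(n_frames=2)
```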

YARN resource allocation and energy-saving scheduling method and system based on service level agreement

The present invention discloses a YARN resource allocation and energy-saving scheduling method and system based on a service level agreement. The method comprises the following steps: before MapReduce programs are submitted, pre-analyzing them and deriving the required performance indexes from their previous run logs; after submission, calculating the minimum task parallelism degree that meets the completion-time upper limit according to each program's performance indexes; allocating quantitative resources to each MapReduce program through an SLA resource scheduler according to its parallelism degree; monitoring the task completion status of each program to obtain the ideal execution time and frequency of the remaining tasks; and, according to the expected execution frequency of the remaining tasks, dynamically adjusting the CPU voltage and frequency through the CPUfreq subsystem to save energy. On the premise of meeting the service level agreement of each MapReduce program, quantitative resources are allocated to it, and dynamic voltage and frequency scaling is combined to minimize energy consumption in the cloud computing platform.
Owner:SHANDONG UNIV
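The two calculations at the heart of this abstract can be illustrated with simple back-of-the-envelope formulas: the smallest parallelism degree that still fits under a completion-time upper limit, and a CPU frequency just high enough for the remaining work (the idea behind driving DVFS through the CPUfreq subsystem). These formulas are illustrative assumptions, not the patent's exact model.

```python
# Illustrative estimates for SLA-driven parallelism and frequency scaling.
import math

def min_parallelism(n_tasks, avg_task_seconds, deadline_seconds):
    # With p parallel slots, makespan is roughly ceil(n/p) * t; pick the
    # smallest p whose makespan fits under the deadline.
    waves_allowed = max(1, int(deadline_seconds // avg_task_seconds))
    return math.ceil(n_tasks / waves_allowed)

def target_frequency(max_freq_ghz, ideal_seconds, remaining_seconds):
    # Slow the CPU down so the remaining tasks still finish just in time.
    scale = min(1.0, ideal_seconds / remaining_seconds)
    return round(max_freq_ghz * scale, 2)

print(min_parallelism(n_tasks=120, avg_task_seconds=30, deadline_seconds=600))      # -> 6
print(target_frequency(max_freq_ghz=3.0, ideal_seconds=200, remaining_seconds=300)) # -> 2.0
```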

Mobile phone multi-station parallel testing device and realization method thereof

The present invention discloses a mobile phone multi-station parallel testing device and a method of realizing it, solving the complexity and low efficiency of existing mobile phone testing equipment. The device comprises a sound-insulated box body with a cabinet door, a multi-station parallel testing mechanism arranged in the box body, an industrial control computer and a control circuit board. The multi-station parallel testing mechanism comprises a multi-station phone carrier for holding multiple phones, and testing stations fixed in the box body at positions corresponding to the carrier's phone positions, at least two of which test different items. The mechanism also comprises a central rotation mechanism that rotates the carrier so that the phones on it move from the current testing station to the next one. Through time-division multiplexing and multi-task parallel testing, the working efficiency is four times that of a traditional testing process without adding hardware resources.
Owner:CHENGDU BOTOVISION TECH CO LTD
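The time-division-multiplexing claim can be made concrete with a toy schedule: when there are as many phones as test stations, every rotation step tests all phones in parallel, each at a different station, so N test items take N rotation steps instead of N steps per phone. The station names below are made up for illustration.

```python
# Toy rotation schedule for a multi-station parallel tester.
STATIONS = ["audio", "camera", "rf", "display"]

def rotation_schedule(phones):
    steps = []
    for step in range(len(STATIONS)):
        # phone i sits at station (i + step) mod N during this rotation step
        steps.append({p: STATIONS[(i + step) % len(STATIONS)]
                      for i, p in enumerate(phones)})
    return steps

for step, assignment in enumerate(rotation_schedule(["P1", "P2", "P3", "P4"])):
    print(f"step {step}: {assignment}")
```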

Large power grid overall situation on-line integrated quantitative evaluation method based on response

The invention provides a response-based online integrated quantitative evaluation method for the overall situation of a large power grid. The method comprises: a first step of acquiring power grid topology information from the SCADA and EMS systems and establishing its correspondence with the power grid components in the WAMS system; a second step of acquiring power flow data for the current operating mode from the SCADA, EMS or WAMS system, or acquiring time-domain data for various anticipated power flows or transient faults from a DSA system; and a third step of dividing the response data at a macroscopic level into two operating scenes, steady state (or quasi-steady state) and transient (or dynamic), in which static stability situation assessment of the grid is carried out with an online node-oriented method and transient stability situation assessment is carried out with an online unit-oriented method. Comprehensive static and transient assessment indicators are constructed from component-level thermal stability, acceptable ranges of electrical parameters and system-level stability, which improves the comprehensiveness and reasonableness of the overall situation assessment indicators, and the efficiency of the integrated assessment is improved by executing the tasks in parallel.
Owner:STATE GRID CORP OF CHINA +1
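The closing point about executing tasks at the same time can be sketched simply: once the response data has been split into the two operating scenes, the steady-state and transient assessments are independent and can run as parallel tasks. The assessment functions and index values below are placeholders, not the patent's formulas.

```python
# Run the two independent assessment tasks in parallel.
from concurrent.futures import ThreadPoolExecutor

def steady_state_assessment(snapshot):
    # would compute node-oriented static stability indicators here
    return {"scene": "steady", "index": 0.92}

def transient_assessment(fault_records):
    # would compute unit-oriented transient stability indicators here
    return {"scene": "transient", "index": 0.87}

with ThreadPoolExecutor(max_workers=2) as pool:
    futs = [pool.submit(steady_state_assessment, {"flows": []}),
            pool.submit(transient_assessment, [])]
    for fut in futs:
        print(fut.result())
```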

Method and device for managing download of mobile communication equipment terminal browser

The invention relates to file downloading technology for a mobile communication terminal browser, and in particular to a method and device for managing downloads in such a browser. The method comprises the following steps: first, establishing a multi-task web browsing engine and a multi-task background download engine in the main thread of the mobile terminal system; second, executing file download tasks through the background download engine when the terminal receives a file download request, and executing web browsing tasks through the web browsing engine when the terminal receives a web browsing request. The tasks of the background download engine and the web browsing engine are executed in parallel, and the maximum number of concurrent download tasks of the background download engine can be preset according to user requirements. Because downloads run in the background, downloading and web browsing can proceed at the same time and multiple tasks can be downloaded simultaneously, making full use of resources.
Owner:ALIBABA (CHINA) CO LTD
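The background download engine with a preset cap on concurrent downloads maps naturally onto a bounded thread pool, as in the sketch below. The class name, URLs, and the simulated transfer are illustrative assumptions; real browsing and network code are not shown.

```python
# A background download engine with a user-configurable cap on concurrent
# downloads, running alongside "browsing" work in the main thread.
import time
from concurrent.futures import ThreadPoolExecutor

class BackgroundDownloadEngine:
    def __init__(self, max_parallel_downloads=3):
        # the cap corresponds to the preset maximum number of download tasks
        self.pool = ThreadPoolExecutor(max_workers=max_parallel_downloads)
        self.futures = []

    def download(self, url):
        self.futures.append(self.pool.submit(self._fetch, url))

    def _fetch(self, url):
        time.sleep(0.1)            # stand-in for the real network transfer
        return f"downloaded {url}"

    def wait_all(self):
        return [f.result() for f in self.futures]

engine = BackgroundDownloadEngine(max_parallel_downloads=2)
for url in ["http://example.com/a", "http://example.com/b", "http://example.com/c"]:
    engine.download(url)           # queued; at most two transfer at once
print("browsing continues in the main thread while downloads run")
print(engine.wait_all())
```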