77 results about "Reduce scheduling overhead" patented technology

Distributed system multilevel fault tolerance method under cloud environment

The invention provides a multilevel fault tolerance method for distributed systems in a cloud environment, which comprises: a distributed application coordination algorithm based on virtual machine disk snapshots, which can back up the I/O state together with the operating system environment it depends on; a hierarchical fault detection and recovery mechanism, which detects faults at the physical layer, virtualization layer, cloud platform layer, virtual machine OS layer and application layer in real time and applies a matched recovery method to each kind of fault, so that fault detection and recovery are refined to the module level and a top-down stepwise recovery strategy minimizes recovery overhead; and a template-based virtual fault-tolerant cluster service deployment strategy, with which a user can deploy a virtual machine fault-tolerant cluster in one click from a virtual machine template, upload the job to be run, and use the authorized fault-tolerant PaaS service. The invention effectively addresses the problems that existing cluster deployment is complicated and fault tolerance overhead is expensive, and can comprehensively handle distributed application faults at all levels in a cloud computing environment.
Owner:HUAZHONG UNIV OF SCI & TECH
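The layered detect-then-recover strategy described above can be sketched as a loop over the layers that stops at the first fault it finds. This is a minimal illustration, not the patented method: the layer names come from the abstract, but the probing order and the `probe`/`recover` callbacks are assumptions.

```python
# Hypothetical sketch of hierarchical fault detection with stepwise
# recovery. The layer ordering and callback interface are illustrative
# assumptions, not taken from the patent.

LAYERS = ["physical", "virtual", "cloud_platform", "vm_os", "application"]

def detect_and_recover(probe, recover):
    """Probe the layers in order and recover only the first faulty one.

    probe(layer)   -> True if the layer is healthy
    recover(layer) -> runs the layer-specific recovery action
    Returns the recovered layer name, or None if all layers are healthy.
    """
    for layer in LAYERS:
        if not probe(layer):
            recover(layer)  # apply the recovery method matched to this fault
            return layer    # stop: recovering this layer covers the ones above
    return None
```

Restricting recovery to the single failing layer is what keeps the recovery overhead low relative to restarting the whole stack.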

A method of admission control and load balancing in a virtualized environment

The invention relates to an admission control and load balancing method for a virtualized environment, which comprises the following steps: 1) modifying the simple earliest-deadline-first scheduling algorithm in the Xen virtual machine to realize an NWC-PEDF (Non-Work-Conserving Partitioned Earliest Deadline First) scheduling algorithm; 2) introducing an admission control mechanism for each physical processing unit (PCPU) to bound the load of the Xen virtual processors allocated to it; 3) controlling the allocation and mapping of virtual CPUs (VCPUs) to PCPUs on a multicore hardware platform with a first-fit strategy, so as to balance the load on each PCPU; and 4) providing a support mechanism for adjusting VCPU scheduling parameters, allowing an administrator to tune them at run time according to changes in virtual machine load. The invention meets the requirements that the hard real-time tasks of an embedded real-time system place on a virtualized multicore hardware platform, improves the scheduling algorithm in the Xen virtual machine, and realizes the admission control and load balancing mechanisms.
Owner:ZHEJIANG UNIV
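Steps 2 and 3 above combine a per-PCPU admission test with first-fit placement. A minimal sketch, assuming the standard EDF admission condition that total utilization on a processor must not exceed 1.0 (the patent's exact bound is not stated in the abstract):

```python
def first_fit_assign(vcpus, num_pcpus):
    """Assign each VCPU to the first PCPU that can admit it.

    vcpus: list of (name, utilization) pairs, utilization = budget / period.
    A PCPU admits a VCPU only if its total utilization stays <= 1.0,
    the classic EDF schedulability bound (an assumption here).
    Returns (mapping of name -> pcpu index, list of rejected names).
    """
    load = [0.0] * num_pcpus
    mapping, rejected = {}, []
    for name, util in vcpus:
        for p in range(num_pcpus):
            if load[p] + util <= 1.0:   # admission control check
                load[p] += util
                mapping[name] = p        # first fit: take the first PCPU that fits
                break
        else:
            rejected.append(name)        # no PCPU can admit this VCPU
    return mapping, rejected
```

First fit tends to pack early PCPUs while the admission bound keeps every PCPU's load feasible for EDF, which is what protects the hard real-time tasks.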

Hybrid task scheduling method of directed acyclic graph (DAG) based reconfigurable system

The invention discloses a hybrid task scheduling method for a directed acyclic graph (DAG) based reconfigurable system. The method includes decomposing an application into multiple subtasks described by a DAG and scheduling them through a scheduler. Software tasks enter queue Q1 and, after being managed by a task manager, are executed according to CPU idle state and scheduling priority. Hardware tasks enter queue Q2; a hardware task in Q2 that can reuse a reconfigurable resource moves on to queue Q3, otherwise it waits in Q2 in priority order to be configured and loaded by a loader. Tasks that finish configuration and loading, or that come from Q3, enter queue Q4; after being managed by the task manager they enter queue Q5 and run in priority order. This cycle repeats until all tasks have run, and the total running time is finally reported. Here Q1 is the software task queue, Q2 the pre-configuration hardware task queue, Q3 the configuration-reuse queue, Q4 the configuration-complete queue, and Q5 the running task queue. The configuration-reuse strategy reduces the number of reconfigurations and thereby the overall scheduling overhead.
Owner:JIANGSU UNIV OF SCI & TECH
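The Q2 to Q3 routing decision above hinges on whether a hardware task's configuration is already loaded on the reconfigurable fabric. A minimal sketch of that decision, with the task and configuration representation assumed for illustration:

```python
from collections import deque

def route_hw_tasks(tasks, loaded_configs):
    """Route hardware tasks between the pre-configuration queue (Q2)
    and the configuration-reuse queue (Q3).

    tasks: list of (task_name, config_id) pairs, in arrival order.
    loaded_configs: set of config ids already on the fabric (mutated
    as the loader configures new ones).
    Returns (q2, q3) as lists.
    """
    q2, q3 = deque(), deque()
    for task, config in tasks:
        if config in loaded_configs:
            q3.append(task)            # reuse: skip reconfiguration entirely
        else:
            q2.append(task)            # must be configured and loaded
            loaded_configs.add(config) # after loading, later tasks can reuse it
    return list(q2), list(q3)
```

Every task that lands in Q3 avoids a configuration load, which is the source of the reduced scheduling overhead the abstract claims.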

Task scheduling method and device of smart home operating system and storage medium

Status: Pending | Publication: CN111143045A | Effects: solves the technical problem of large scheduling overhead; meets real-time requirements | Classification: program initiation/switching; computer control | Topics: operating system; computer science
The invention provides a task scheduling method and device for a smart home operating system, and a storage medium. The method comprises: determining the priority of a to-be-executed task through a first scheduler according to input parameters, where the input parameters comprise at least one of the task's importance, its urgency, and its task period; scheduling the to-be-executed tasks in the task queue according to their priorities; and, when a first task and a second task have the same priority, instructing a second scheduler to schedule them according to their deadlines, where the first task is the highest-priority task in the task queue and the second task is the currently executing task. The method solves the technical problem of large scheduling overhead when tasks are scheduled in a smart home system, better meeting the real-time requirements of the system while reducing its scheduling overhead.
Owner:QINGDAO HAIER TECH
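The two-scheduler split above amounts to choosing by priority first and breaking ties by deadline (EDF). A minimal sketch, with the task representation and the convention that a lower number means higher priority assumed for illustration:

```python
def pick_next(tasks):
    """Pick the next task: highest priority wins (lower number = higher,
    an assumed convention); among equal priorities, the earlier deadline
    wins, mirroring the first-scheduler / second-scheduler split.

    tasks: non-empty list of dicts with "name", "priority", "deadline".
    """
    # Tuple comparison does both levels in one pass: priority, then deadline.
    return min(tasks, key=lambda t: (t["priority"], t["deadline"]))
```

Deferring the deadline comparison to ties only is what keeps the common-case scheduling decision cheap, which matches the stated goal of reducing scheduling overhead.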

Service data processing method and device and micro-service architecture system

The invention relates to the technical field of microservices, and particularly discloses a service data processing method and device and a microservice architecture system. The system comprises a main processing thread module, a shared memory, a pull thread module, a distributed message publish-subscribe system, and a stream processing system. The main processing thread module receives a service message, parses it to obtain a service data stream, and sends the stream to a message receiving queue of the distributed publish-subscribe system; it also obtains calculation results from the shared memory and returns them to the caller. The stream processing system obtains the service data stream from the receiving queue, performs logic calculation on it to obtain a calculation result, and writes the result into a message return queue of the distributed publish-subscribe system. The pull thread module obtains the calculation result from the return queue and stores it in the shared memory. The scheme improves system throughput and server resource utilization.
Owner:INDUSTRIAL AND COMMERCIAL BANK OF CHINA
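The data flow above (main thread publishes, stream processor computes, pull thread writes results into shared memory) can be sketched in a single process with in-memory queues standing in for the distributed publish-subscribe system. The doubling "calculation" and all names are illustrative, not from the patent:

```python
import queue
import threading

def run_pipeline(messages):
    """Single-process sketch of the described flow.

    The main thread publishes parsed service data to a request queue;
    a 'stream processing' worker computes and posts to a return queue;
    a pull thread copies results into shared memory (a plain dict here)
    where the main thread can read them and reply to callers.
    """
    req_q, ret_q = queue.Queue(), queue.Queue()
    shared = {}  # stands in for the shared memory

    def stream_processor():
        for _ in range(len(messages)):
            key, value = req_q.get()
            ret_q.put((key, value * 2))  # stand-in for the logic calculation

    def pull_thread():
        for _ in range(len(messages)):
            key, result = ret_q.get()
            shared[key] = result         # store the result in shared memory

    workers = [threading.Thread(target=stream_processor),
               threading.Thread(target=pull_thread)]
    for w in workers:
        w.start()
    for key, value in messages:          # main thread publishes the stream
        req_q.put((key, value))
    for w in workers:
        w.join()
    return shared
```

Decoupling the reply path (pull thread plus shared memory) from the compute path is what lets the main thread stay responsive, which is the throughput claim in the abstract.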

A scheduling information receiving method and device

The invention discloses a scheduling information receiving method. The method comprises the following steps: acquiring downlink control information (DCI); and determining the scheduling information corresponding to the physical uplink shared channel (PUSCH) in the DCI according to the mapping relation between the transmission configuration resources used by the PUSCH and the scheduling information in the DCI. Because each UE determines its scheduling information from the mapping between the transmission configuration resources it uses for the PUSCH and the scheduling information in the DCI, the base station can send a single DCI to schedule all UEs for which such a mapping exists. Compared with the prior art, this reduces scheduling overhead and resource waste, and remarkably improves the efficiency with which the communication system schedules terminals.
Owner:BEIJING SAMSUNG TELECOM R&D CENT +1
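The lookup each UE performs above can be sketched as a table from its PUSCH transmission configuration resource to a field inside the shared DCI. The payload layout here is a hypothetical dict, not the 3GPP DCI encoding:

```python
def extract_scheduling_info(dci_payload, ue_resource):
    """Select this UE's scheduling entry from a group DCI.

    dci_payload: hypothetical decoded DCI with
      "resource_to_field": map from PUSCH transmission configuration
                           resource id to a field index, and
      "fields": the list of per-UE scheduling entries.
    Returns the UE's entry, or None if this DCI does not schedule it.
    """
    mapping = dci_payload["resource_to_field"]
    if ue_resource not in mapping:
        return None                      # no mapping: DCI is not for this UE
    return dci_payload["fields"][mapping[ue_resource]]
```

One DCI carrying many entries, disambiguated by each UE's own resource, is what replaces per-UE DCIs and reduces the scheduling overhead.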

Data reading method and device based on DMA engine and data transmission system

Status: Active | Publication: CN112199309A | Effects: meets the need to access multiple destination storage ends; reduces scheduling overhead | Classification: electric digital data processing | Topics: computer hardware; data transport
The invention provides a data reading method and device based on a DMA engine, and a data transmission system. The method comprises the following steps: receiving read request commands from user ends, caching them in first memories according to a first sequence, and sending them onward in that sequence, with the first memories in one-to-one correspondence with the user ends; sending each read request command to the corresponding destination storage end, and recording a second sequence in second memories, the second sequence being the order of the user ends whose read requests each destination storage end received, with the second memories in one-to-one correspondence with the destination storage ends; caching the returned data in third memories according to the second sequence, where each destination storage end corresponds to multiple third memories and the third memories of a destination storage end correspond one-to-one with the user ends; and transmitting the return data corresponding to each read request command to the user end according to the first sequence. Compared with the prior art, the method reduces the number of DMA engines and interfaces, and thereby the scheduling overhead.
Owner:BEIJING ZETTASTONE TECH CO LTD +1
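The ordering bookkeeping above, where a per-destination FIFO remembers which user issued each in-flight read so that returned data can be routed back in order, can be sketched in software. The data model (dict-of-dicts storage, tuple requests) is assumed purely for illustration:

```python
from collections import deque

def dma_read(requests, storage):
    """Sketch of the read-ordering scheme.

    requests: list of (user, destination, address) in issue order
              (the 'first sequence').
    storage:  dict destination -> dict address -> data.
    A per-destination FIFO (the 'second memory') records which user
    issued each read; since a destination answers its reads in FIFO
    order, popping that FIFO routes each returned datum to the right
    user, and each user receives its data in its own issue order.
    Returns dict user -> list of returned data.
    """
    dest_fifo = {}   # destination -> FIFO of (user, address)
    per_user = {}    # user -> returned data, in that user's issue order
    for user, dest, addr in requests:
        dest_fifo.setdefault(dest, deque()).append((user, addr))
    for dest, fifo in dest_fifo.items():
        while fifo:
            user, addr = fifo.popleft()  # destination replies in FIFO order
            per_user.setdefault(user, []).append(storage[dest][addr])
    return per_user
```

Because the ordering lives in small FIFOs rather than in extra DMA channels, one engine can serve many user/destination pairs, which matches the claimed reduction in engines, interfaces, and scheduling overhead.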