
106 results about "Global scheduling" patented technology

Information rapid access and emergency rescue airdrop robot system in disaster environment

The invention relates to an information rapid access and emergency rescue airdrop robot system for disaster environments. The system comprises V-shaped rail wheels, a parachuting system, an automatic tripping device, an airdrop robot body, an airdrop robot landing unlocking device, a buffer landing system, a remote wireless communication system and sensors. The V-shaped rail wheels are mounted on two sides of a landing chassis. The parachuting system and the buffer landing system are connected through the automatic tripping device. The airdrop robot body is fastened in the buffer landing system through the airdrop robot landing unlocking device. The remote wireless communication system comprises a wireless communication module arranged on the airdrop robot body and a remote console arranged in a rescue command center. The system can be airdropped to a disaster scene immediately after a disaster occurs, and achieves intelligent control, precise fixed-point airdropping, safe landing of the robot, and real-time multi-sensor remote information interaction during the airdrop, so it is of great significance for emergency rescue and global scheduling.
Owner:TIANJIN UNIV OF TECH & EDUCATION TEACHER DEV CENT OF CHINA VOCATIONAL TRAINING & GUIDANCE

Apparatus and method for controlling a wireless feeder network

An apparatus and method are provided for controlling a wireless feeder network used to couple access base stations of an access network with a communications network. The wireless feeder network comprises a plurality of feeder base stations coupled to the communications network and a plurality of feeder terminals coupled to associated access base stations. Each feeder terminal has a feeder link with a feeder base station, and the feeder links are established over a wireless resource comprising a plurality of resource blocks. Sounding data obtained from the wireless feeder network is used to compute an initial global schedule to allocate to each feeder link at least one resource block, and the global schedule is distributed whereafter the wireless feeder network operates in accordance with the currently distributed global schedule to pass traffic between the communications network and the access base stations. Using traffic reports received during use, an evolutionary algorithm is applied to modify the global schedule, with the resultant updated global schedule then being distributed for use. This enables the allocation of resource blocks to individual feeder links to be varied over time taking account of traffic within the wireless feeder network, thereby improving spectral efficiency.
Owner:AIRSPAN IP HOLDCO LLC
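The evolutionary refinement of the global schedule described above can be illustrated with a minimal sketch (hypothetical Python; the names `fitness`, `evolve_schedule` and the data layout are illustrative, not from the patent): a schedule maps each feeder link to a set of resource blocks, a fitness function scores the allocation against reported traffic demand, and random single-block reassignments are kept only when they do not lower fitness.

```python
import random

def fitness(schedule, demand):
    # Reward each feeder link for blocks it can actually use, capped at its demand.
    return sum(min(len(blocks), demand[link]) for link, blocks in schedule.items())

def evolve_schedule(schedule, demand, generations=200, seed=0):
    """Mutate the block-to-link allocation, keeping changes that do not hurt fitness."""
    rng = random.Random(seed)
    best = {link: set(blocks) for link, blocks in schedule.items()}
    best_fit = fitness(best, demand)
    links = list(best)
    for _ in range(generations):
        cand = {link: set(blocks) for link, blocks in best.items()}
        src, dst = rng.sample(links, 2)
        if cand[src]:
            cand[dst].add(cand[src].pop())  # move one resource block between links
        f = fitness(cand, demand)
        if f >= best_fit:
            best, best_fit = cand, f
    return best
```

A real scheduler would also model interference between links and recombine whole schedules; this sketch keeps only the mutate-and-select core of the evolutionary loop.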

GPU resource pool scheduling system and method

The invention discloses a GPU resource pool scheduling system, and the system comprises a GPU cloud computing power center and a GPU cloud control node; the GPU cloud computing power center comprises a plurality of GPU computing power units, and each GPU computing power unit comprises a VMM and an RC, wherein the GPU cloud control node comprises an RS; the GPU computing power units are used for providing computing power of GPUs; each VMM is used for providing a control interface, receiving a resource scheduling instruction sent by the RS, creating a virtual machine according to the instruction, allocating vGPU resources to the virtual machine and starting the virtual machine; each RC is used for counting resource data of the GPU computing power unit and reporting the resource data to the RS; each RS is used for collecting resource data reported by each RC and sending a resource scheduling instruction to each VMM, and scheduling resources of the GPU computing power unit globally, including the steps of gathering GPU resources to form a plurality of groups of GPU hardware sets, and forming a GPU resource pool by the plurality of groups of GPU hardware sets. According to the system, unified resource pool scheduling management of various manufacturers, GPU models and GPU virtualization modes in a cloud computing platform is realized; the invention further discloses a GPU resource pool scheduling method.
Owner:WUHAN UNIV OF TECH +1
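The pooling step — gathering GPU resources from the RC reports into hardware sets keyed by manufacturer, model and virtualization mode, then scheduling against the pool — can be sketched as follows (hypothetical Python; the dictionary fields such as `vendor`, `virt_mode` and `free_vgpus` are assumptions, not the patent's data model):

```python
from collections import defaultdict

def build_pool(reports):
    """Group RC resource reports into hardware sets keyed by vendor/model/virt mode."""
    pool = defaultdict(list)
    for r in reports:
        pool[(r["vendor"], r["model"], r["virt_mode"])].append(r)
    return pool

def schedule_vgpu(pool, vendor, model, virt_mode, vgpus):
    """Pick the matching unit with the most free vGPUs that can satisfy the request."""
    units = pool.get((vendor, model, virt_mode), [])
    candidates = [u for u in units if u["free_vgpus"] >= vgpus]
    if not candidates:
        return None  # the RS would fall back or queue the request here
    unit = max(candidates, key=lambda u: u["free_vgpus"])
    unit["free_vgpus"] -= vgpus
    return unit["unit_id"]
```

Keying the pool on (vendor, model, virtualization mode) is what lets one scheduler treat heterogeneous hardware uniformly: each key is one homogeneous hardware set.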

Method and system for scheduling delay slot in very-long instruction word structure

The invention discloses a method and a system for scheduling a delay slot in a very-long instruction word structure. The method comprises the steps of locally scheduling instructions in a current basic block; after the local scheduling is finished, judging whether a residual instruction delay slot exists, if not, ending the scheduling, otherwise, putting instructions which can be filled into the instruction delay slot and are high in cost into a local standby instruction cache; globally scheduling instructions in a basic block of a branch target, selecting an instruction which can be filled into the instruction delay slot and placing the instruction in a global standby instruction cache; and selecting an instruction from the local standby instruction cache and/or the global standby instruction cache and filling the instruction into the residual instruction delay slot. The system comprises a local scheduling unit, a global scheduling unit and a balanced scheduling unit. According to the method and the system for scheduling the delay slot in the very-long instruction word structure disclosed by the invention, through balance between scheduling of the delay slot and program parallelism, as well as balance between local scheduling and global scheduling, high execution efficiency of programs can be implemented.
Owner:INST OF ACOUSTICS CHINESE ACAD OF SCI
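The final step — drawing candidates from both standby caches and filling the remaining delay slots, preferring the highest-cost instructions — can be sketched as (hypothetical Python; candidate instructions are modeled simply as (text, cost) pairs, which is an assumption of this sketch, not the patent's representation):

```python
def fill_delay_slots(n_slots, local_cache, global_cache):
    """Fill remaining delay slots from the local and global standby caches.

    Each candidate is an (instruction, cost) pair; picking the highest-cost
    candidates first hides the most latency in the delay slots."""
    candidates = sorted(local_cache + global_cache, key=lambda c: c[1], reverse=True)
    chosen = [instr for instr, _ in candidates[:n_slots]]
    # A slot that cannot be filled must still issue something, so pad with NOPs.
    chosen += ["nop"] * (n_slots - len(chosen))
    return chosen
```

The real balanced scheduling unit would also check data dependences and branch semantics before moving an instruction across a block boundary; this sketch shows only the selection policy.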

Hierarchical collaborative decision-making intra-network resource scheduling method, system, and storage medium

The invention discloses a hierarchical collaborative decision-making intra-network resource scheduling method, a system, and a storage medium. The intra-network resource scheduling method comprises the steps of obtaining a computing power demand interest packet requested by an upstream network node in a computing power network; judging whether the current network node meets the computing power demand of the computing power demand interest packet or not, if yes, providing computing power service for data in the computing power demand interest packet according to a deployed performance function, and if not, providing forwarding service for the computing power demand interest packet. According to the technical scheme, the scheduling decision mechanism of the fine-grained local scheduling layer is optimized by means of global information, so that the utilization rate of resources such as global computing power and storage of the computing power network is improved, and load balancing of resources in the network is realized; besides, through the mode of combining the coarse-grained global scheduling layer and the fine-grained local scheduling layer, the non-end-to-end hierarchical collaborative decision-making intra-network resource scheduling capability is provided, the realization of an efficient and balanced intra-network resource scheduling function from the technical level is facilitated, and the overall performance of the computing power network is also improved.
Owner:PEKING UNIV SHENZHEN GRADUATE SCHOOL
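The serve-or-forward decision at each node can be sketched as (hypothetical Python; the fields `free_flops`, `demand_flops` and the callback names are illustrative assumptions): the node serves the interest packet locally when it has enough free computing power, and otherwise forwards it along the network.

```python
def handle_interest(node, packet, forward_upstream):
    """Serve the interest packet locally if capacity allows, else forward it."""
    if node["free_flops"] >= packet["demand_flops"]:
        node["free_flops"] -= packet["demand_flops"]
        # Apply the node's deployed service function to the packet's data.
        return "served", node["service_fn"](packet["data"])
    return "forwarded", forward_upstream(packet)
```

In the patent's scheme the coarse-grained global layer would bias this local decision with global load information; the sketch shows only the local branch point.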

Photoetching procedure dynamic scheduling method based on index forecasting and solution similarity analysis

The invention discloses a photoetching procedure dynamic scheduling method based on index forecasting and solution similarity analysis and belongs to the fields of advanced manufacture, automation and information. The dynamic scheduling method aims at photoetching procedure dynamic scheduling on a semiconductor production line. The method includes the steps that the photoetching procedure dynamic scheduling problem is divided into an equipment selection scheduling sub-problem and a workpiece sequencing scheduling sub-problem, and a performance index forecasting model of the workpiece sequencing scheduling sub-problem is established on line; then the original scheduling problem is solved by utilizing a differential evolution algorithm based on solution similarity analysis. In the differential evolution algorithm, the performance index forecasting model of the workpiece sequencing scheduling sub-problem is used for performing quick rough estimation of the global scheduling performance of solutions of the equipment selection scheduling sub-problem. In the estimation process, a mode of combining accurate estimation and rough estimation is adopted to perform the performance estimation on solutions in the differential evolution algorithm; by using the dynamic scheduling method, the efficiency and the effect of photoetching procedure production and scheduling can be remarkably improved.
Owner:正大业恒生物科技(上海)有限公司

Regulation and control cloud data processing method, device and system

The invention belongs to the field of data processing, and discloses a regulation and control cloud data processing method, device and system. In the method, leading node equipment obtains a global scheduling task, decomposes the global scheduling task to obtain scheduling tasks, and transmits the scheduling tasks to cooperative node equipment; a data acquisition range and a data processing rule of each piece of cooperative node equipment are obtained, and the data acquisition range and the data processing rule are sent to each piece of cooperative node equipment; each piece of cooperative node equipment receives and executes the scheduling task issued by the leading node equipment; the data acquisition range and the data processing rule issued by the leading node equipment are received; acquired data in the data acquisition range is obtained based on the scheduling task, the acquired data is processed according to the data processing rule to obtain processed data, and the processed data is uploaded to the leading node equipment; and the leading node equipment receives the processed data sent by each piece of cooperative node equipment. The problem that the bandwidth pressure of a wide-area data network is large due to large computing and storage pressure of a cloud center and repeated uploading of data is solved, and the quality of regulation and control cloud data is improved.
Owner:CHINA ELECTRIC POWER RES INST
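The decompose-distribute-collect flow between the leading node and the cooperative nodes can be sketched as (hypothetical Python; representing acquisition ranges as numeric intervals and the processing rule as a callable are simplifying assumptions of this sketch):

```python
def decompose(global_task, nodes):
    """Leading node: split the global scheduling task into one sub-task per node."""
    return {n: {"range": r, "rule": global_task["rule"]}
            for n, r in zip(nodes, global_task["ranges"])}

def run_cooperative(subtask, local_data):
    """Cooperative node: keep data in its acquisition range, apply the rule locally."""
    lo, hi = subtask["range"]
    return [subtask["rule"](x) for x in local_data if lo <= x < hi]

def collect(results):
    """Leading node: merge the processed data uploaded by each cooperative node."""
    merged = []
    for vals in results.values():
        merged.extend(vals)
    return sorted(merged)
```

Because each node uploads only its already-processed slice, the cloud center never receives raw or duplicated data, which is the bandwidth saving the abstract claims.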

Power distribution network distributed scheduling method

Status: Pending · Publication: CN112183865A · Effects: cut back-up burden; increase the adjustable range · Topics: single network parallel feeding arrangements, forecasting, global scheduling, engineering
The invention discloses a power distribution network distributed scheduling method, which comprises the following steps of: firstly, establishing a power distribution network global scheduling model, giving global robust cost according to the maximum acceptable operation risk, and allocating robust cost coefficients to REG power stations in the power distribution network global scheduling model by minimizing the whole-system standby requirement; then constructing a power distribution network distributed scheduling model through regional decomposition and model correction; finally, solving the power distribution network distributed scheduling model through an adaptive step length ADMM algorithm, obtaining a scheduling result of the power distribution network, and scheduling the power distribution network. On the premise of ensuring the operation reliability of the system, the system energy consumption caused by the scheduling strategy is reduced, and the balance between the reliability and economy of the power distribution network can be realized. Meanwhile, when the power distribution network distributed scheduling model is constructed, the interaction process among the sub-regions is considered, and the power distribution network distributed scheduling model is solved by adopting an adaptive step length ADMM algorithm, so that the solving efficiency of the system scheduling model is greatly improved.
Owner:HUAZHONG UNIV OF SCI & TECH +2
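The adaptive step length ADMM idea can be illustrated on a toy consensus problem (hypothetical Python, not the patent's power-flow model): minimize (x-a)^2 + (z-b)^2 subject to x = z, with the standard residual-balancing rule that doubles or halves the step size rho when the primal or dual residual dominates.

```python
def admm_consensus(a, b, rho=1.0, iters=100, tol=1e-8):
    """Adaptive-step ADMM for min (x-a)^2 + (z-b)^2 s.t. x = z (scaled dual u)."""
    x = z = u = 0.0
    for _ in range(iters):
        x = (2 * a + rho * (z - u)) / (2 + rho)       # x-minimization
        z_old = z
        z = (2 * b + rho * (x + u)) / (2 + rho)       # z-minimization
        u += x - z                                     # scaled dual update
        r, s = abs(x - z), rho * abs(z - z_old)        # primal / dual residuals
        if r > 10 * s:        # primal residual dominates: raise the step size
            rho *= 2.0; u /= 2.0
        elif s > 10 * r:      # dual residual dominates: lower the step size
            rho /= 2.0; u *= 2.0
        if r < tol and s < tol:
            break
    return x, z
```

In the distributed scheduling model, each sub-region plays the role of one block and the coupling variables on region boundaries play the role of the consensus constraint; adapting rho is what speeds up convergence when the initial step size is poorly chosen.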