57 results about "Improve computing resource utilization" patented technology

Multi-unmanned aerial vehicle auxiliary edge computing resource allocation method based on task prediction

The invention discloses a multi-unmanned-aerial-vehicle-assisted edge computing resource allocation method based on task prediction. The method comprises the following steps: firstly, modeling the communication model, the computing model and the energy loss model in the unmanned aerial vehicle assisted edge computing offloading scene; modeling the system total energy consumption minimization problem of the unmanned aerial vehicle assisted edge computing offloading network as a task-predictable process of the terminal devices; obtaining prediction model parameters for different terminal devices by centralized training on the historical data of the terminal devices; obtaining the prediction task set of the next time slot with the prediction model, based on the task information of the currently accessing terminal devices; and, based on the prediction task set, decomposing the original problem into an unmanned aerial vehicle deployment problem and a task scheduling problem for joint optimization. The deep learning algorithm effectively reduces the response delay and completion delay of tasks, thereby reducing computation energy consumption; an evolutionary algorithm is introduced to solve the joint unmanned aerial vehicle deployment and task scheduling optimization problem, greatly reducing the hovering energy consumption of the unmanned aerial vehicles and increasing the utilization rate of computing resources.
Owner:DALIAN UNIV OF TECH
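The prediction step above trains per-device models on historical task data and then forecasts the next time slot's task set. As an illustration only, a minimal sketch using exponential smoothing as a stand-in for the patent's learned prediction model (all names here are hypothetical):

```python
# Hypothetical sketch: predict each device's next-slot task load with
# exponential smoothing, a stand-in for the learned prediction model.

def predict_next_slot(history, alpha=0.5):
    """Exponentially smooth a device's past task sizes to estimate the next slot."""
    estimate = history[0]
    for x in history[1:]:
        estimate = alpha * x + (1 - alpha) * estimate
    return estimate

def predicted_task_set(device_histories, alpha=0.5):
    """Build the predicted task set that drives UAV deployment and scheduling."""
    return {dev: predict_next_slot(h, alpha) for dev, h in device_histories.items()}
```

The predicted task set then serves as the input to the joint deployment and scheduling optimization described in the abstract.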

Multi-workflow scheduling method based on genetic algorithm under cloud environment

The invention discloses a multi-workflow scheduling method based on a genetic algorithm under a cloud environment. The method comprises the following steps: the previous workflow scheduling state is preserved, the genetic algorithm and a new workflow are initialized, the fitness of each individual for the new workflow is calculated, and two parent individuals are selected; according to the genetic algorithm, the parent individuals undergo crossover and single-point mutation to obtain offspring individuals, the fitness of the offspring individuals is calculated and compared with that of the corresponding parent individuals, and the two individuals with smaller fitness values are selected and added to the offspring population; if the size of the offspring population equals that of the parent population, the offspring and parent populations are merged, and the individuals that satisfy the genetic algorithm's selection criterion are chosen from the merged population to form the new population; otherwise, the method returns to the step of selecting parent individuals; finally, the optimal schedule is output after the set number of iterations. The method avoids the situation in which previous workflow scheduling is disrupted and additional communication cost is incurred, and further increases the utilization rate of the computing resources of the virtual machines.
Owner:UNIV OF ELECTRONICS SCI & TECH OF CHINA
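The genetic operators above (fitness evaluation, single-point crossover, single-point mutation) can be sketched for a task-to-VM assignment encoding. This is an illustrative sketch, assuming fitness is the makespan of the assignment, which matches the abstract's preference for smaller fitness values; the encoding and cost model are assumptions:

```python
import random

# Illustrative sketch of the genetic operators described above, on a
# task-to-VM assignment encoding (individual[i] = VM index of task i).

def fitness(individual, task_costs, n_vms):
    """Makespan: the busiest VM's total load under this assignment (smaller is better)."""
    loads = [0.0] * n_vms
    for task, vm in enumerate(individual):
        loads[vm] += task_costs[task]
    return max(loads)

def crossover(p1, p2, point):
    """Single-point crossover producing two offspring."""
    return p1[:point] + p2[point:], p2[:point] + p1[point:]

def mutate(individual, n_vms, rng):
    """Single-point mutation: reassign one random task to a random VM."""
    child = list(individual)
    child[rng.randrange(len(child))] = rng.randrange(n_vms)
    return child
```

Each iteration then keeps the two smaller-fitness individuals among parents and offspring, as the abstract describes.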

Matrix multiplication accelerating method for CPU+DSP (Central Processing Unit + Digital Signal Processor) heterogeneous system

The invention discloses a matrix multiplication accelerating method for a CPU+DSP (Central Processing Unit + Digital Signal Processor) heterogeneous system, and aims to provide an efficient cooperative matrix multiplication method that increases the operation speed of matrix multiplication and maximizes the computing efficiency of the CPU+DSP heterogeneous system. According to the technical scheme, the method comprises the following steps: firstly, initializing parameters and performing information configuration of the CPU+DSP heterogeneous system; secondly, partitioning the to-be-processed data allocated to each computing node between the CPU and the DSP for cooperative processing, according to the design target and the difference in computing performance between the CPU (the main processor) and the DSP (the accelerator); thirdly, letting the CPU and the DSP perform data transmission and cooperative computation concurrently to obtain ⌈M/m⌉ × ⌈N/n⌉ block matrices C(i-1)(j-1); finally, merging the block matrices C(i-1)(j-1) into an M×N result matrix C. With the method, the CPU, while in charge of data transmission and program control, actively cooperates with the DSP to complete the matrix multiplication computation; moreover, data transmission and cooperative computation are overlapped, so the matrix multiplication speed of the CPU+DSP heterogeneous system is increased and the utilization rate of computation resources is improved.
Owner:NAT UNIV OF DEFENSE TECH
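The partitioning arithmetic above is simple to make concrete: an M×N result is tiled into ⌈M/m⌉ × ⌈N/n⌉ blocks, and each block's work is split between CPU and DSP. A minimal sketch, assuming the split is proportional to a relative-performance ratio (the ratio and the row-wise split are illustrative assumptions, not the patent's exact rule):

```python
import math

# Sketch of the block partitioning above: tile an M x N result into
# ceil(M/m) * ceil(N/n) blocks, then split each block's rows between
# CPU and DSP in proportion to an assumed relative-performance ratio.

def block_grid(M, N, m, n):
    """Number of result blocks along each dimension."""
    return math.ceil(M / m), math.ceil(N / n)

def split_rows(block_rows, cpu_perf, dsp_perf):
    """Rows handed to the CPU vs. the DSP for one block."""
    cpu_rows = round(block_rows * cpu_perf / (cpu_perf + dsp_perf))
    return cpu_rows, block_rows - cpu_rows
```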

Service-oriented computing power network system, working method and storage medium

The invention discloses a service-oriented computing power network system, a working method and a storage medium. The computing power network system comprises a computing power service sensing module, a network service sensing module, a service query processor, a strategy and selection decision maker, a computing power service scheduler, a network forwarding control module and network routing and forwarding equipment. The working method comprises the following steps: S1, a computing power service information sensing mechanism; S2, a network service information sensing mechanism; and S3, computing power service distribution and scheduling. The computing power network system connects scattered edge computing nodes into a network, so the computing resources of the distributed edge nodes can be fully utilized and the computing resource utilization rate of the edge computing nodes is increased; by sensing the network state and the computing power of the edge computing nodes, a computing task can be distributed to the optimal edge computing node through the optimal network path, realizing performance optimization of the edge computing network and improving the distribution efficiency of computing tasks.
Owner:PURPLE MOUNTAIN LAB
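The scheduling step S3 above picks the optimal node given sensed network state and node computing power. A toy sketch of one plausible selection rule, assuming total delay is network delay plus task size divided by compute power (the delay model and node descriptors are illustrative assumptions):

```python
# Toy sketch of step S3 above: send a task to the edge node minimizing the
# sum of sensed network delay and estimated compute delay.

def best_node(task_size, nodes):
    """nodes: {name: (network_delay, compute_power)} (illustrative model)."""
    def total_delay(name):
        net, power = nodes[name]
        return net + task_size / power
    return min(nodes, key=total_delay)
```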

Resource allocation method and system

Active patent CN108052396A — solves the problem of unbalanced distribution and low utilization of computing resources, and increases profit.
The invention discloses a resource allocation method and system, which are suitable for use in the field of resource scheduling technology. The method includes: acquiring a starting instruction input by a user, starting a service program according to the starting instruction, and generating at least one thread; acquiring at least one thread requesting the computation resources of the same hardware acceleration card; allocating a service mutex lock to a target thread according to the time order of thread application; allocating the target thread to the target computation unit with the smallest number of queued threads, and releasing the service mutex lock of the target thread; processing the service data of the target thread if the number of queued threads before the target thread is zero; allocating a service mutex lock to the target thread again; and resetting a mark bit of the target thread, making the moving-bit pointer of the queue where the target thread is located point to the next thread whose service data are to be processed, releasing the service mutex lock of the target thread, and cancelling the target thread. With the method, the computation resource utilization rate can be significantly increased, and the application value of the hardware acceleration card is increased.
Owner:SHENZHEN HENGYANG DATA CO LTD
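The core dispatch rule above is: under a service mutex, assign a thread to the computation unit with the fewest queued threads. A minimal sketch of that rule (class and field names are illustrative):

```python
import threading

# Illustrative sketch of the dispatch rule above: under a service mutex, a
# thread is assigned to the computation unit with the fewest queued threads.

class CardDispatcher:
    def __init__(self, n_units):
        self.lock = threading.Lock()   # the "service mutex lock"
        self.queues = [0] * n_units    # queued threads per computation unit

    def assign(self):
        """Pick the least-loaded unit and enqueue on it."""
        with self.lock:
            unit = min(range(len(self.queues)), key=self.queues.__getitem__)
            self.queues[unit] += 1
            return unit

    def finish(self, unit):
        """A thread finished its service data; dequeue from its unit."""
        with self.lock:
            self.queues[unit] -= 1
```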

Pre-distribution method for edge domain resources in edge computing scene

The invention discloses a pre-allocation method for edge domain resources in an edge computing scene. The method comprises the following steps: predicting the arrival rate and measuring the forwarding delay; in the first-level edge domain, determining the type of service to pre-cache according to the arrival rate and the weight of each service, and determining the allocation proportion of each service according to the first delay; in the second-level edge domain, determining the type of service to pre-cache according to the second delay, and obtaining an initial caching scheme through an interior point method; obtaining a new resource caching scheme by randomly selecting a service caching type; and selecting the scheme with the smaller average delay from the initial caching scheme and the new resource caching scheme as the final pre-allocation scheme, thereby completing the pre-allocation. The method uses statistical data to estimate user demand in the next period and pre-allocates service types and quantities to the edge servers according to the estimates, enabling more efficient resource allocation, improving the resource utilization rate, and shortening application delay.
Owner:UNIV OF ELECTRONICS SCI & TECH OF CHINA
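The final selection step above compares the initial interior-point scheme against a randomly perturbed one and keeps whichever has the smaller average delay. A minimal sketch, assuming a scheme is a map from service type to cached-instance count and each service has a fixed per-request delay (both are illustrative assumptions):

```python
# Sketch of the final selection step above: keep the caching scheme with
# the smaller average delay (scheme/delay representations are illustrative).

def average_delay(scheme, delay_per_service):
    """Mean delay over all cached instances in a scheme."""
    total = sum(count * delay_per_service[svc] for svc, count in scheme.items())
    return total / sum(scheme.values())

def choose_scheme(initial, perturbed, delay_per_service):
    """Return whichever scheme has the smaller average delay."""
    return min(initial, perturbed, key=lambda s: average_delay(s, delay_per_service))
```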

Deep neural network inference acceleration method and system based on multi-operator fusion

The invention relates to a deep neural network inference acceleration method and system based on multi-operator fusion. The method comprises the steps of: inputting a neural network computation graph, obtaining the neural network computation logic diagram, and obtaining the complete forward-computation symbolic expression of the neural network according to the computational relations between operators; using a fusible-operator search method to automatically simplify the forward-computation symbolic expression of the neural network to its simplest form, thereby achieving multi-operator fusion; according to the multi-operator fusion result and the simplest symbolic expression, constructing a new neural network inference logic diagram, decoupling the simplest symbolic expression, performing offline computation, storing the new model parameters, and constructing the corresponding neural network model structure; and finally loading the new model parameters to realize inference acceleration. The invention reduces the overhead of gaps between operator executions, improves the utilization rate of the device's computing resources, and optimizes the overall network inference speed.
Owner:ZHEJIANG LAB
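A classic concrete instance of this kind of offline multi-operator fusion (offered here as an illustration, not as the patent's specific method) is folding an elementwise scale-and-shift, such as inference-time BatchNorm, into the preceding linear layer, so one operator disappears from the graph:

```python
# Illustrative multi-operator fusion: fold y = scale * (W @ x + bias) + shift
# into a single linear operator with new parameters (W', bias'), computed
# offline so the fused graph has one fewer operator at inference time.

def fuse_linear_scale_shift(W, bias, scale, shift):
    """Return (W', bias') with W'[i][j] = scale[i]*W[i][j],
    bias'[i] = scale[i]*bias[i] + shift[i]."""
    W_fused = [[scale[i] * w for w in row] for i, row in enumerate(W)]
    bias_fused = [scale[i] * bias[i] + shift[i] for i in range(len(bias))]
    return W_fused, bias_fused
```

The fused parameters are saved as the "new model parameters" and loaded at inference, exactly mirroring the offline-computation step described in the abstract.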

On-rocket integrated comprehensive radio frequency system

An on-rocket integrated comprehensive radio frequency system comprises a comprehensive measurement module, a comprehensive radio frequency front end and a comprehensive measurement antenna, wherein the comprehensive measurement module is connected with the comprehensive radio frequency front end through a VPX bus interface, and the comprehensive radio frequency front end is connected with the comprehensive measurement antenna. The comprehensive measurement module is used for collecting multiple analog quantities, processing telemetry data, external measurement uplink and downlink communication, safety control uplink and self-destruction control, and third-party monitoring. The comprehensive radio frequency front end is used for telemetry downlink amplification and filtering, external measurement uplink amplification and filtering, external measurement downlink amplification and filtering, and safety control uplink amplification and filtering. The comprehensive measurement antenna integrates a telemetry / external measurement downlink antenna, an external measurement uplink antenna and a safety control uplink antenna. With this scheme, the collection, telemetry, external measurement and safety control functions are integrated, and scattered standalone single-machine products are integrally designed into a 3U VPX module, a power amplifier and an antenna, reducing the size and system complexity while increasing the utilization rate of computing resources.
Owner:CHINA ACAD OF LAUNCH VEHICLE TECH

Multi-dimensional heterogeneous resource quantification method and device based on industrial edge computing system

The invention belongs to the field of the industrial internet, and particularly relates to a multi-dimensional heterogeneous resource quantification method and device based on an industrial edge computing system. The method comprises the following steps: acquiring multi-dimensional heterogeneous resource information and mixed task flow information; according to the multi-dimensional heterogeneous resources, analyzing the relationship between the multi-dimensional resources and the real-time computing power of the edge computing equipment; according to the task flow information, analyzing the computing power required to execute different types of tasks; and determining the degree of match between a task's computing power demand and the real-time computing power supply of the heterogeneous edge computing device, determining whether the real-time supply meets the demand, and carrying out incremental learning on the relationship between the multi-dimensional resources and the real-time computing power of the edge computing device, so that the quantification accuracy is continuously improved. The method improves the overall utilization rate of scattered heterogeneous resources in the industrial edge computing system, reduces the network communication load and private data interaction, and supports personalized flexible production in the industrial internet.
Owner:SHENYANG INST OF AUTOMATION - CHINESE ACAD OF SCI

Method for cooperative computing offloading in F-RAN architecture

The invention provides a method for cooperative computing offloading in an F-RAN architecture: an F-RAN offloading scheme based on NOMA, and an offloading method based on SCA, an interior point method and a coalition game, so as to efficiently utilize the computing resources of edge nodes in the network. In the offloading scheme, a task user can offload a computing task, based on NOMA, to the main F-AP associated with it and to an idle user with spare computing resources; the main F-AP further offloads the computing task to other auxiliary F-APs through the cooperative communication function between F-APs. Meanwhile, taking user delay tolerance into account, a layered iterative algorithm is provided: the inner layer combines SCA and an interior point method to obtain the offloading scheme once user association is determined, and the outer layer optimizes user association based on coalition game theory, minimizing the total energy consumption of the system and improving system reliability. Compared with common offloading schemes and algorithms in the prior art, the method remarkably reduces the total energy consumption of the system.
Owner:FUZHOU UNIV
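The outer layer above iterates user-to-F-AP association until no user can lower system energy by switching. A toy sketch of that improvement loop, greatly simplified: the per-user, per-AP energy table and greedy switching rule are illustrative assumptions, not the patent's coalition-game formulation:

```python
# Toy sketch of the outer association loop above: each user switches F-AP
# only when the move lowers total system energy (energy model illustrative).

def total_energy(assoc, energy):
    """Sum of per-(user, F-AP) energies under the current association."""
    return sum(energy[(u, ap)] for u, ap in assoc.items())

def improve_association(assoc, aps, energy):
    """Repeatedly let users switch to a cheaper F-AP until stable."""
    changed = True
    while changed:
        changed = False
        for u in assoc:
            best = min(aps, key=lambda ap: energy[(u, ap)])
            if energy[(u, best)] < energy[(u, assoc[u])]:
                assoc[u] = best
                changed = True
    return assoc
```

Once the association is fixed, the inner layer (SCA plus an interior point method in the patent) would compute the offloading fractions.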

Frame and communication method

The invention provides a frame and a communication method, and relates to the technical field of optical communication. The frame includes a frame housing, a network management control unit, a disk array and a data processing unit, wherein the network management control unit, the disk array and the data processing unit are arranged in a first frame housing and are connected with external equipment through a backboard interface on the first frame housing. The network management control unit is used for processing a user command, performing format conversion on it, sending it to the business board cards, the data processing unit and the disk array, and sending the business state fed back by the business board cards to the user. The data processing unit is used for processing the data in the business board cards, the network management control unit and the disk array, and reporting its processing results in real time. The disk array is used for storing the data in the network management control unit, the state information of each business board card, and the log information of the business board cards. The frame not only increases reliability and maintainability, but also improves its reaction capability and service life.
Owner:ZTE CORP

Cooperative control method for underwater glider formation

The invention discloses a cooperative control method for an underwater glider formation. The method comprises the following steps: step 1, each underwater glider in the formation obtains its motion parameter data and sends the data to a formation control platform; step 2, after receiving the motion parameter data, the formation control platform calculates a correction value for each underwater glider according to the motion parameter data and the set parameter data; step 3, the formation control platform sends the correction value to the corresponding underwater glider; and step 4, each underwater glider receives the correction value and adjusts its navigation parameters accordingly. Because the formation control platform calculates the correction value of each underwater glider, the computing resources of the underwater gliders are saved, which helps improve the utilization rate of their computing resources. The formation control platform also tracks the motion parameter data of every member of the formation and knows the position of each underwater glider in detail, so the formation pattern can be controlled more accurately.
Owner:THE PLA NAVY SUBMARINE INST
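Steps 2 through 4 above can be sketched with the simplest possible correction rule: the platform compares a measured navigation parameter against its set value and returns a proportional correction, which the glider applies. The proportional gain and the choice of heading as the parameter are illustrative assumptions:

```python
# Sketch of steps 2-4 above: the platform computes a proportional correction
# from a glider's measured vs. set heading, and the glider applies it
# (the gain and the use of heading are illustrative assumptions).

def correction(set_heading, measured_heading, gain=0.5):
    """Platform side: proportional correction toward the set heading."""
    return gain * (set_heading - measured_heading)

def apply_correction(heading, corr):
    """Glider side: adjust the navigation parameter by the received correction."""
    return heading + corr
```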

Electric charge calculating method, control server, calculating server and system

The invention discloses an electric charge calculating method, which comprises the following steps: a connection between a control server and a calculating server is established; the control server obtains, in real time, an electric charge calculating request sent by a client and the state parameters of the calculating servers connected to it; the control server generates an electric charge calculating order according to the request and the state parameters and sends the order to the corresponding calculating server; the calculating server executes the order and generates an electric charge value; the calculating server sends the electric charge value to a data center for storage; and the calculating server generates an ending signal and sends it to the control server. The invention further discloses a control server, a calculating server and an electric charge calculating system. With the method, the number of calculating servers can be dynamically adjusted, spreading the load over a plurality of calculating servers working at the same time; therefore, the utilization rate of computing resources is improved and elastic expansion of resources is realized.
Owner:GUANGDONG POWER GRID CO LTD INFORMATION CENT +1
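The dispatch loop above (control server turns each billing request into an order for a calculating server chosen by its state parameter) can be sketched minimally. Using current load as the state parameter and lowest-load selection is an illustrative assumption:

```python
# Minimal sketch of the dispatch above: each request becomes an order for
# the calculating server reporting the lowest load; the load table is
# updated in place (state parameter and selection rule are illustrative).

def dispatch(requests, server_load):
    """Assign each request to the least-loaded server, one unit of load each."""
    orders = []
    for req in requests:
        server = min(server_load, key=server_load.get)
        orders.append((req, server))
        server_load[server] += 1
    return orders
```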
