233 results about "To achieve load balancing" patented technology

Job scheduling system suitable for grid environment and based on reliability cost

The invention relates to a job scheduling system that is applicable to the grid environment and based on reliability cost. As shown in figure 1, the system comprises three layers: the first layer is a job submission interface module 1, the second layer is a job scheduling module 2, and the grid resource platform 7 forms the bottom layer. In terms of operating principle, the core of the invention is the job scheduling module in the second layer, which includes a pre-scheduling module 3, a scheduling strategy module 4, a job completion time prediction module 5 and a resource information module 6. The system proposes a job running time prediction model and a resource availability prediction model; the running time prediction model is based on a mathematical model and the availability prediction model is based on a Markov model, and both offer high accuracy and good generality. According to different job quality-of-service requirements and resource characteristics, the system applies a replica fault-tolerance strategy, a primary-replica asynchronous fault-tolerance strategy, or a retry fault-tolerance strategy, which gives it high flexibility and effectiveness; it also supports both computation-intensive and data-intensive jobs, so it has good generality. Compared with prior-art scheduling systems, the job scheduling system supports more concurrent users, improves the resource utilization rate, and offers good generality, good extensibility and high system throughput.
Owner:HUAZHONG UNIV OF SCI & TECH
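
The abstract above only names the resource availability prediction model as Markov-based; below is a minimal Python sketch of that general idea, not the patented method, assuming a two-state (available / unavailable) Markov chain fitted from each resource's observed state history. The functions transition_matrix, availability_probability and pick_resource, and the node data, are invented for illustration.

```python
import numpy as np

# Hypothetical illustration: a two-state Markov availability model
# (states: 0 = available, 1 = unavailable) estimated from a resource's
# observed state history, used to score candidate resources for a job.

def transition_matrix(history):
    """Estimate a 2x2 transition matrix from a sequence of 0/1 states."""
    counts = np.ones((2, 2))          # Laplace smoothing
    for prev, cur in zip(history, history[1:]):
        counts[prev, cur] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def availability_probability(history, steps):
    """Probability the resource is still available after `steps` intervals."""
    P = transition_matrix(history)
    state = np.array([1.0, 0.0])      # assume the resource is available now
    return (state @ np.linalg.matrix_power(P, steps))[0]

def pick_resource(resources, predicted_runtime_steps):
    """Choose the resource with the highest predicted availability."""
    return max(
        resources,
        key=lambda r: availability_probability(r["history"], predicted_runtime_steps),
    )

resources = [
    {"name": "node-a", "history": [0, 0, 1, 0, 0, 0, 0, 1, 0, 0]},
    {"name": "node-b", "history": [0, 1, 1, 0, 1, 0, 1, 1, 0, 1]},
]
print(pick_resource(resources, predicted_runtime_steps=5)["name"])
```

A full scheduler in the spirit of the abstract would combine such an availability estimate with the job running time prediction and the reliability cost before choosing among the fault-tolerance strategies.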

Sparse matrix data storage method based on graphics processing unit (GPU)

Inactive · CN102436438A · Reduce reduction steps · Avoid thread idling · Complex mathematical operations · Display memory · Engineering
The invention discloses a sparse matrix data storage method based on a graphics processing unit (GPU). The method comprises the following steps: 1) sorting the row-length array length[] in ascending order; 2) dividing the array length[] into four segments, [0, 8), [8, 16), [16, 32) and [32, +infinity), according to the number of non-zero elements per row, and merging every 32, 16, 8 and 4 rows in the corresponding segments; 3) padding the rows in each data segment, where the padded elements are all zero; 4) generating the three one-dimensional arrays of the SC-CSR format: cval[], ccol_ind[] and crow_ptr[]. In this method, segmenting the rows reduces the variation in row length, which reduces load imbalance between warps and between thread blocks; interleaving and merging adjacent rows avoids wasting warp resources when a row has fewer than 32 non-zero elements, improves the efficiency of coalesced access to CUDA device memory, and reduces the reduction steps in the computing kernel, thereby significantly improving the performance of sparse matrix-vector multiplication.
Owner:HUAZHONG UNIV OF SCI & TECH
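
The abstract outlines the SC-CSR construction only at a high level, so the sketch below approximates two of its steps in Python: sorting rows by non-zero count into the four segments and zero-padding each segment's rows to a common width. It omits the interleaved merging of every 32 / 16 / 8 / 4 adjacent rows and the final cval[] / ccol_ind[] / crow_ptr[] layout; all function names are illustrative.

```python
import numpy as np

# Illustrative sketch only: sort rows of a CSR matrix by non-zero count, bucket
# them into the segments [0,8), [8,16), [16,32), [32,inf), and zero-pad each
# bucket's rows to a common length so threads in a warp do comparable work.

def segment_rows(csr_row_ptr):
    """Group row indices by non-zero count into the four length segments."""
    lengths = np.diff(csr_row_ptr)
    order = np.argsort(lengths, kind="stable")        # rows sorted by length
    bounds = [0, 8, 16, 32, np.inf]
    segments = [[] for _ in range(4)]
    for row in order:
        for s in range(4):
            if bounds[s] <= lengths[row] < bounds[s + 1]:
                segments[s].append(row)
                break
    return segments, lengths

def pad_segment(rows, lengths, values, cols, row_ptr):
    """Zero-pad every row of one segment to the segment's maximum length."""
    if not rows:
        return np.empty(0), np.empty(0, dtype=int)
    width = int(max(lengths[r] for r in rows))
    val = np.zeros((len(rows), width))
    col = np.zeros((len(rows), width), dtype=int)
    for i, r in enumerate(rows):
        n = int(lengths[r])
        val[i, :n] = values[row_ptr[r]:row_ptr[r] + n]
        col[i, :n] = cols[row_ptr[r]:row_ptr[r] + n]
    return val, col

# Tiny 4x4 CSR example
row_ptr = np.array([0, 2, 3, 3, 7])
cols    = np.array([0, 2, 1, 0, 1, 2, 3])
vals    = np.array([1., 2., 3., 4., 5., 6., 7.])
segs, lens = segment_rows(row_ptr)
for s in segs:
    print(pad_segment(s, lens, vals, cols, row_ptr))
```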

Multithreading processor and method realizing the functions of a central processing unit and a graphics processor

The invention relates to a multithreading processor that realizes the functions of a central processing unit and a graphics processor. The multithreading processor comprises a graphics fixed-function processing module, a multithreading parallel central processing module and a memory module. The graphics fixed-function processing module handles the fixed-function stages of graphics processing; the multithreading parallel central processing module implements both the central processing function and the programmable stages of graphics processing through unified thread scheduling, and exchanges graphics data with the graphics fixed-function processing module through the memory module after the programmable processing; and the memory module provides a unified storage space for the graphics fixed-function processing module and the multithreading parallel central processing module for the storage, buffering and / or exchange of data. The invention also relates to a data processing method. The multithreading processor and method achieve load balancing among multiple thread processing engines.
Owner:SHENZHEN ZHONGWEIDIAN TECH
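
The patent describes hardware, so code cannot reproduce it; the Python sketch below is only a software analogue of the unified-scheduling idea, assigning both CPU-style and programmable graphics tasks to whichever thread processing engine currently carries the least load. The class and task names are invented.

```python
import heapq

# Purely illustrative software analogue (not the patented hardware design):
# a unified scheduler that sends both "cpu" tasks and programmable "graphics"
# tasks to the thread processing engine with the least accumulated load.

class UnifiedScheduler:
    def __init__(self, num_engines):
        # min-heap of (accumulated_load, engine_id)
        self.engines = [(0.0, i) for i in range(num_engines)]
        heapq.heapify(self.engines)
        self.assignments = {i: [] for i in range(num_engines)}

    def submit(self, task_name, kind, cost):
        """Assign a task (cpu or graphics) to the least-loaded engine."""
        load, engine = heapq.heappop(self.engines)
        self.assignments[engine].append((task_name, kind))
        heapq.heappush(self.engines, (load + cost, engine))
        return engine

sched = UnifiedScheduler(num_engines=4)
for i in range(8):
    sched.submit(f"vertex_shader_{i}", "graphics", cost=2.0)
    sched.submit(f"app_thread_{i}", "cpu", cost=1.0)
print(sched.assignments)
```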

Parallel simulation and optimization method for computer architecture schemes based on a cluster system

The invention discloses a parallel method for simulating and optimizing computer architecture schemes based on a cluster system, and aims to provide a parallel method for simulating and optimizing computer architecture design schemes. The technical scheme is as follows: a parallel computer system consisting of a main control node and simulation nodes and provided with a remote command execution environment is first built, and a global configuration program, a simulation configuration file generating program, a task dispatching program and a result analyzing program are installed on the main control node. The global configuration program sets the global configuration; the simulation configuration file generating program generates all simulation configuration files; the task dispatching program distributes simulation evaluation tasks to each node and controls each simulation node to perform simulation evaluation; and the result analyzing program gathers statistics from the simulation result files returned by the simulation nodes, screens out the optimal configuration parameter values, and outputs a report. Adopting the invention reduces the time needed for evaluation and optimization and improves the accuracy of scheme selection.
Owner:NAT UNIV OF DEFENSE TECH
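
As a rough illustration of the dispatch-and-collect workflow described above (not the patented programs), the Python sketch below enumerates architecture configurations, fans the evaluations out across worker "nodes" with a thread pool standing in for remote command execution, and screens out the best-scoring configuration. The cost model and parameter names are invented.

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import product

# Hypothetical sketch: enumerate configurations, run simulations in parallel,
# and keep the configuration with the best (lowest) simulated score.

def run_simulation(config):
    """Placeholder cost model; a real system would launch a simulator remotely."""
    cache_kb, issue_width = config
    return 1000.0 / cache_kb + 5.0 / issue_width     # lower is better

def generate_configs():
    return list(product([256, 512, 1024], [2, 4, 8]))  # cache size x issue width

def dispatch(configs, num_nodes=4):
    with ThreadPoolExecutor(max_workers=num_nodes) as pool:
        scores = list(pool.map(run_simulation, configs))
    return min(zip(scores, configs))                   # best (score, config)

best_score, best_config = dispatch(generate_configs())
print("best configuration:", best_config, "score:", round(best_score, 3))
```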

SAN storage resource unified management and allocation method

The invention provides a unified management and allocation method for SAN storage resources. The system comprises an application server agent, a storage server agent and a management server. The application server is provided with an application server agent that comprises a host information collection module, an initiator management module and a disk management module; these modules collect host information and register it with the resource management server, authenticate and connect the target via CHAP, and automatically format a logical volume into a specified file system on demand. Resource usage information is collected and stored to provide a global view of resource usage in the SAN environment. On the same management server, storage server resources are allocated to application servers, logical partitions of the application server are divided and the file system is formatted; the connection between the initiator on the application server and the target on the storage server is established and CHAP authentication is configured; and a page displays the resource usage of each application server and the resource utilization of each storage server, so as to prevent resources from lying idle and to achieve load balancing among the storage servers.
Owner:LANGCHAO ELECTRONIC INFORMATION IND CO LTD
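
One simple way to realize the balanced load among storage servers mentioned above is to place each new logical volume on the least-utilized server that still has enough free space; the Python sketch below shows only that heuristic, with invented server names and fields, and is not the patented allocation logic.

```python
# Illustrative sketch only: pick the least-utilized storage server for a new
# logical volume so that load stays balanced across storage servers.

storage_servers = [
    {"name": "san-node-1", "capacity_gb": 4096, "allocated_gb": 3100},
    {"name": "san-node-2", "capacity_gb": 4096, "allocated_gb": 1200},
    {"name": "san-node-3", "capacity_gb": 2048, "allocated_gb": 900},
]

def utilization(server):
    return server["allocated_gb"] / server["capacity_gb"]

def allocate_volume(servers, size_gb):
    """Place the volume on the least-utilized server that can still hold it."""
    candidates = [s for s in servers
                  if s["capacity_gb"] - s["allocated_gb"] >= size_gb]
    if not candidates:
        raise RuntimeError("no storage server has enough free space")
    target = min(candidates, key=utilization)
    target["allocated_gb"] += size_gb
    return target["name"]

print(allocate_volume(storage_servers, size_gb=500))   # -> san-node-2 (lowest utilization)
```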

Load balancing method among nonadjacent heterogeneous cells in ubiquitous network

The invention discloses a load balancing method among nonadjacent heterogeneous cells in a ubiquitous network, and mainly resolves the problem in the prior art that load balancing cannot be achieved when an adjacent cell is too heavily loaded to accept the load the current cell needs to transfer. In the method, wireless access points (APs) periodically collect the load information of the current cell, the adjacent cells and the two-hop cells, and whether load balancing should be triggered is judged according to whether the normalized load of the current cell exceeds a load balancing start threshold. Based on the load information of the adjacent cells and the two-hop cells, it is decided whether load is transferred to an adjacent cell or to a two-hop cell; a switching target is chosen, and the optimal load transfer cell with the minimum switching cost is selected according to the switching cost of the switching target; the switching target is then instructed to switch to the corresponding cell to complete the load balancing. The method achieves load balancing among nonadjacent heterogeneous cells, enlarges the range over which load can be balanced, and improves network performance and the resource utilization rate; it can be used for resource optimization in heterogeneous communication environments.
Owner:XIDIAN UNIV
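
The Python sketch below mirrors the decision flow in the abstract: trigger balancing when the current cell's normalized load exceeds a start threshold, gather adjacent and two-hop candidates, and pick the one with the minimum switching cost. The threshold value and the cost function are assumptions, since the abstract does not give them.

```python
# Minimal sketch of the decision flow described above, with an invented cost
# function; the patent's actual switching-cost metric is not given here.

LOAD_BALANCE_THRESHOLD = 0.8     # normalized-load trigger (assumed value)

def needs_balancing(current_cell):
    return current_cell["normalized_load"] > LOAD_BALANCE_THRESHOLD

def switching_cost(candidate):
    """Toy cost: prefer lightly loaded cells, penalize two-hop transfers."""
    hop_penalty = 0.2 if candidate["hops"] == 2 else 0.0
    return candidate["normalized_load"] + hop_penalty

def choose_transfer_cell(current_cell, neighbors, two_hop_cells):
    if not needs_balancing(current_cell):
        return None
    candidates = [c for c in neighbors + two_hop_cells
                  if c["normalized_load"] < LOAD_BALANCE_THRESHOLD]
    return min(candidates, key=switching_cost) if candidates else None

current   = {"name": "cell-A", "normalized_load": 0.92}
neighbors = [{"name": "cell-B", "normalized_load": 0.85, "hops": 1}]
two_hop   = [{"name": "cell-C", "normalized_load": 0.40, "hops": 2}]
print(choose_transfer_cell(current, neighbors, two_hop))   # -> cell-C
```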