
Distributed management framework based on extensible and high-performance computing and distributed management method thereof

A high-performance distributed computing technology, applied to electrical components, transmission systems, etc. It addresses the inability of centralized architectures to meet the ever-larger data-computation demands of the future, and achieves the effects of avoiding single points of failure, improving resource utilization, and placing low computing-power demands on individual nodes.

Pending Publication Date: 2017-06-06
Applicant: 江苏十月中宸科技有限公司


Problems solved by technology

[0006] The current mainstream centralized management architecture clearly cannot meet the computing requirements of ever-larger data in the future.



Examples


Example 1

[0048] Example 1: The algorithm of task 1 is composed of two basic calculation cores, namely the SHA1 calculation core and the AES128 calculation core.

[0049] Step 1: The user uploads the task to the management terminal, which analyzes task 1. By evaluating the computing resources of each computing node, it determines that load balancing is achieved when node 1 runs 30% of the total task and node 2 runs 70%.

[0050] Step 2: The task sending unit sends 30% of the task to computing node 1 through the network port and 70% of the task to computing node 2 through the network port.

[0051] Step 3: After a device node receives its data, it hands the task to the scientific computing card for calculation. The scientific computing card obtains the computation result and uploads it to the management terminal through the network data port.

[0052] Step 4: The management end obtains the data reported by each de...
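The split-and-dispatch procedure of Steps 1 and 2 can be sketched as follows. This is a minimal illustration assuming the management terminal models each node's computing resources as a single numeric capacity; the `compute_shares` and `dispatch` helpers are hypothetical names, not interfaces from the patent.

```python
# Hypothetical sketch of Steps 1-2: the management terminal derives each
# node's load-balanced share from its computing resources, then splits the
# task accordingly. All names here are illustrative assumptions.

def compute_shares(capacities):
    """Share per node, proportional to its computing resources (Step 1)."""
    total = sum(capacities.values())
    return {node: cap / total for node, cap in capacities.items()}

def dispatch(task_units, shares):
    """Assign contiguous runs of task units to nodes by share (Step 2)."""
    assignment, start = {}, 0
    for node, share in shares.items():
        count = round(share * len(task_units))
        assignment[node] = task_units[start:start + count]
        start += count
    if start < len(task_units):          # rounding remainder, if any
        assignment[node].extend(task_units[start:])
    return assignment

# Example 1: node 1 can absorb 30% of the task, node 2 the remaining 70%.
shares = compute_shares({"node1": 30, "node2": 70})
work = dispatch(list(range(100)), shares)
print(len(work["node1"]), len(work["node2"]))  # 30 70
```

In this sketch the shares come straight from the example's 30/70 split; a real management terminal would compute the capacities from live node metrics.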

Example 2

[0053] Example 2: The algorithm of task 2 consists of three basic calculation cores, namely the DES calculation core, the MD4 calculation core, and the SHA1 calculation core.

[0054] Step 1: The user uploads the task to the management terminal, which analyzes task 2. By evaluating the computing resources of each computing node, it determines that load balancing is achieved when node 1 runs 30% of the total task, node 2 runs 30%, and node 3 runs 40%.

[0055] Step 2: Through the task delivery unit, send 30% of the total task to computing node 1, 30% to computing node 2, and 40% to computing node 3, each through the network port.

[0056] Step 3: After a device node receives its data, it hands the task to the scientific computing card for calculation. The scientific ...
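The node-side Step 3 — handing the received share to the scientific computing card, chaining the algorithm's basic calculation cores, and uploading the result — can be sketched as below. This is an assumption-laden model: `hashlib` digests stand in for the hardware cores (MD5 substitutes for cores such as MD4/DES that Python's standard library does not reliably provide), and `report` models the upload as a dictionary write rather than a network call.

```python
# Hypothetical sketch of Step 3: a device node chains the task's basic
# calculation cores on its "scientific computing card" and reports the
# result. hashlib stand-ins replace the hardware cores; all names are
# illustrative assumptions, not the patent's interfaces.
import hashlib

CORES = {
    "sha1": lambda data: hashlib.sha1(data).digest(),
    "md5":  lambda data: hashlib.md5(data).digest(),  # stand-in for MD4/DES
}

def computing_card(data, core_names):
    """Run the algorithm's basic calculation cores in sequence over the data."""
    for name in core_names:
        data = CORES[name](data)
    return data

def report(node, result, terminal):
    """Upload the node's result to the management terminal (here, a dict)."""
    terminal[node] = result.hex()

terminal = {}
result = computing_card(b"node3 share of task 2", ["md5", "sha1"])
report("node3", result, terminal)
print(len(terminal["node3"]))  # 40 hex chars: the final core is SHA1 (20 bytes)
```

The point of the sketch is the pipeline shape — receive, chain cores, upload — which both examples share regardless of which cores compose the algorithm.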



Abstract

The invention discloses a distributed management framework based on extensible, high-performance computing, and a distributed management method thereof. The framework is composed of an integrated management system, a centralized management system, and a distributed management system. The integrated management system is connected to the centralized management system through a local area network, and to the distributed management system through the same local area network. The centralized management system consists mainly of multiple servers equipped with high-performance scientific computing cards; the servers manage all the scientific computing cards centrally and perform unified computing. The distributed management system consists of multiple ordinary computers equipped with the same high-performance scientific computing cards. Because distributed management places low demands on any single computing node and its hardware, various idle resources can be utilized effectively and the utilization rate of ordinary computers is greatly improved.
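The composition the abstract describes can be pictured with a minimal data-model sketch. The class and field names below are assumptions chosen for illustration, not terms from the patent; the point is only that the integrated system sees every scientific computing card across both subsystems.

```python
# Minimal sketch of the framework's composition: an integrated management
# system overseeing a centralized subsystem (servers with scientific
# computing cards) and a distributed subsystem (ordinary computers with the
# same cards), all on one LAN. Names are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class ComputeNode:
    name: str
    kind: str                 # "server" or "ordinary-computer"
    computing_cards: int      # high-performance scientific computing cards

@dataclass
class ManagementFramework:
    centralized: list = field(default_factory=list)   # server cluster
    distributed: list = field(default_factory=list)   # idle ordinary computers

    def all_cards(self):
        """The integrated system manages every card across both subsystems."""
        return sum(n.computing_cards for n in self.centralized + self.distributed)

fw = ManagementFramework(
    centralized=[ComputeNode("srv1", "server", 4)],
    distributed=[ComputeNode("pc1", "ordinary-computer", 1),
                 ComputeNode("pc2", "ordinary-computer", 1)],
)
print(fw.all_cards())  # 6
```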

Description

Technical field

[0001] The invention relates to a distributed management framework and method based on scalable, high-performance computing.

Background technique

[0002] At present, most high-performance computing systems in the industry are based on a centralized management architecture; that is, all high-performance computing cards are pooled under central computing management. This architecture has several problems.

[0003] Resources cannot be deployed quickly, so computing capability cannot be scaled flexibly.

[0004] Resource utilization is low: when distributed computers are idle, a large amount of computing resource goes unused.

[0005] The high hardware cost of the traditional architecture also limits its scalability.

[0006] The current mainstream centralized management architecture clearly cannot meet the computing requirements of ever-larger data in the future.

[0007] The purpose of the presen...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): H04L29/08
CPC: H04L67/10
Inventors: 张涛, 邓佳伟
Owner: 江苏十月中宸科技有限公司