Computing resource allocation method, device and storage medium based on hybrid distributed architecture

A hybrid-computing resource-allocation technology applied in the field of big data processing. It addresses problems such as resource preemption, inconsistent configuration processes and user interfaces across frameworks, and the inability of pure-CPU systems to meet the real-time requirements of computing tasks, achieving efficient and flexible scheduling that satisfies diverse computing needs.

Active Publication Date: 2022-05-24
NAT COMP NETWORK & INFORMATION SECURITY MANAGEMENT CENT

AI Technical Summary

Problems solved by technology

In recent years, many deep learning frameworks built on Spark have been proposed. Although they are all built on Spark, their configuration processes and user interfaces differ, and resource preemption occurs when multiple frameworks are used at the same time.
On the other hand, the core resource of traditional computing generally refers to the central processing unit (CPU). However, with the development of artificial intelligence and other technologies, computing requirements have diversified, and the CPU alone can no longer meet the real-time requirements of computing tasks.




Embodiment Construction

[0027] To further illustrate the technical means adopted by the present invention and the effects achieved, the invention is described in detail below with reference to the accompanying drawings and preferred embodiments.

[0028] As shown in Figure 1, a distributed computing system based on hybrid computing resources provided by an embodiment of the present invention includes a computing engine layer 11, a hybrid computing encapsulation layer 13, and a resource scheduling layer 12, wherein:

[0029] The computing engine layer 11 is composed of multiple deep learning frameworks built on the same Spark computing engine;

[0030] The hybrid computing encapsulation layer 13 is used to uniformly encapsulate the access interfaces of each deep learning framework for the computing engine layer 11;

[0031] The resource scheduling layer 12 includes a variety of heterogeneous computing resources, the heterogeneous computing resources including at least one of the following: CPU, GPU and FPGA.
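The unified access interface described in paragraphs [0028]–[0031] can be sketched roughly as an adapter pattern: the encapsulation layer exposes one submission entry point, and each Spark-based deep learning framework is wrapped behind it. All class names, framework names, and the `submit` signature below are illustrative assumptions, not part of the patent.

```python
from abc import ABC, abstractmethod


class FrameworkAdapter(ABC):
    """Uniform access interface the encapsulation layer exposes for
    every deep learning framework built on the Spark engine."""

    @abstractmethod
    def submit(self, task: dict) -> str:
        """Submit a computing task; return a task identifier."""


class TensorFlowOnSparkAdapter(FrameworkAdapter):
    def submit(self, task: dict) -> str:
        # Translate the unified task description into this framework's
        # own submission call (elided in this sketch).
        return f"tfos-{task['name']}"


class BigDLAdapter(FrameworkAdapter):
    def submit(self, task: dict) -> str:
        return f"bigdl-{task['name']}"


# Registry maintained by the hypothetical encapsulation layer.
ADAPTERS = {
    "tensorflow": TensorFlowOnSparkAdapter(),
    "bigdl": BigDLAdapter(),
}


def submit_task(framework: str, task: dict) -> str:
    """Single entry point: callers never touch framework-specific APIs."""
    return ADAPTERS[framework].submit(task)
```

With this shape, differing configuration processes and user interfaces (the problem noted in the background) are hidden behind one call.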



Abstract

The present invention proposes a distributed computing system based on hybrid computing resources, to allocate resources rationally and meet the requirements of diverse computing tasks. The system includes a computing engine layer and a resource scheduling layer. The computing engine layer is composed of multiple deep learning frameworks built on the same Spark computing engine, and the access interfaces of the deep learning frameworks are uniformly encapsulated for the computing engine layer. The resource scheduling layer includes a variety of heterogeneous computing resources, which include at least one of the following: CPU, GPU and FPGA. In the resource scheduling layer, tasks to be processed are divided into different task queues according to their task types, and physical machines are divided into different logical clusters according to the types of computing resources they carry; the tasks in each task queue are then assigned to the corresponding logical cluster for execution.
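The scheduling step in the abstract — queue tasks by type, group physical machines into logical clusters by resource type, then match each queue to its cluster — can be sketched as follows. The task types, the task-type-to-resource mapping, and all function names are hypothetical illustrations under assumed inputs, not the patent's actual implementation.

```python
from collections import defaultdict

# Hypothetical mapping from task type to the resource it needs.
TASK_TYPE_TO_RESOURCE = {
    "training": "GPU",
    "etl": "CPU",
    "encryption": "FPGA",
}


def build_logical_clusters(machines):
    """Group physical machines into logical clusters by the computing
    resource each one carries (CPU, GPU or FPGA)."""
    clusters = defaultdict(list)
    for name, resource in machines:
        clusters[resource].append(name)
    return clusters


def dispatch(tasks, machines):
    """Place each task in a queue by its type, then assign every queue
    to the logical cluster whose resource type matches."""
    clusters = build_logical_clusters(machines)
    queues = defaultdict(list)
    for task_name, task_type in tasks:
        queues[task_type].append(task_name)
    # Each task type maps to (machines in matching cluster, queued tasks).
    assignment = {}
    for task_type, queue in queues.items():
        resource = TASK_TYPE_TO_RESOURCE[task_type]
        assignment[task_type] = (clusters.get(resource, []), queue)
    return assignment
```

For example, with one GPU machine and one CPU machine, a training task lands on the GPU cluster and an ETL task on the CPU cluster, which is the queue-to-cluster matching the abstract describes.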

Description

technical field

[0001] The present invention relates to the technical field of big data processing, and in particular to a computing resource allocation method, device and storage medium based on a hybrid distributed architecture.

Background technique

[0002] The rise of big data technology has reinvigorated artificial intelligence. The Go matches of 2016 once again ignited an upsurge in artificial intelligence; it was precisely the mature and stable support of big data technology that allowed the massive computing tasks behind AlphaGo to be completed. Apache Spark is a fast, general-purpose computing engine designed for large-scale data processing. It retains the advantages of Hadoop MapReduce, but unlike MapReduce, the intermediate output of a job can be stored in memory, eliminating the need to read and write HDFS (a distributed file system); Spark is therefore better suited to iterative algorithms such as data mining and machine learning. In rece...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06F9/50 G06F9/455 G06F9/48 H04L67/10
CPC: G06F9/5027 G06F9/5038 G06F9/45558 G06F9/4881 H04L67/10 G06F2009/45595 G06F2209/484 G06F2209/5021 Y02D10/00
Inventor 钮艳杜翠兰赵淳璐李扬曦项菲李鹏霄佟玲玲张丽王祥井雅琪
Owner NAT COMP NETWORK & INFORMATION SECURITY MANAGEMENT CENT