
Data distributed operation method and device, storage medium and processor

A technology relating to an operation method and operation device, applied in the field of communications; it solves the problem that a specific device cannot be specified in code, and achieves the effect of improving the algorithm efficiency of the system.

Inactive Publication Date: 2019-07-16
ZTE CORP
5 Cites · 14 Cited by

AI Technical Summary

Problems solved by technology

[0006] Embodiments of the present invention provide a data distributed operation method and device, a storage medium, and a processor, so as to at least solve the problem in the related art that, when a user writes distributed code in a large-scale cloud environment, dynamic device allocation makes it impossible to specify a specific device in the code.

Method used



Examples


Embodiment 1

[0023] In this embodiment, a method for distributed operation of data is provided. Figure 1 is a flowchart of a data distributed operation method according to an embodiment of the present invention. As shown in Figure 1, the process includes the following steps:

[0024] Step S102: generate a directed acyclic graph (DAG) from the stand-alone script submitted by the user, where the DAG includes a plurality of operation instances (OPs);

[0025] Step S104: split the OPs according to the generated DAG, the graphics processing unit (GPU) resource request submitted by the user, and the system's GPU resources, to obtain multiple sub-OPs;

[0026] Step S106: according to the calculation loss of each sub-OP, place the multiple sub-OPs on different computing nodes so as to divide them into multiple layers, and run the sub-OPs in parallel on the computing nodes of the multiple layers; where the calculation loss of a sub-OP on the current layer is smaller than the calculation loss of the other sub-OPs on the current layer.
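The three steps above (S102-S106) can be sketched in Python. This is a minimal illustration only: the `Op` dataclass, the even per-GPU split, and the dependency-depth layering are assumptions, since the patent does not disclose its exact data structures or splitting rule.

```python
from dataclasses import dataclass, field

@dataclass
class Op:
    name: str
    cost: float                                 # estimated calculation loss of this (sub-)OP
    deps: list = field(default_factory=list)    # names of upstream OPs in the DAG

def build_dag(script_ops):
    """S102: represent the stand-alone script's operations as a DAG (name -> Op)."""
    return {op.name: op for op in script_ops}

def split_ops(dag, requested_gpus):
    """S104: split each OP into sub-OPs, one equal share per requested GPU (an assumed policy)."""
    sub_ops = []
    for op in dag.values():
        for i in range(requested_gpus):
            sub_ops.append(Op(f"{op.name}#{i}", op.cost / requested_gpus, op.deps))
    return sub_ops

def layer_and_place(sub_ops, dag):
    """S106: group sub-OPs into dependency layers; each layer can run in parallel."""
    depth = {}
    def d(sub_name):
        base = sub_name.split("#")[0]           # recover the parent OP's name
        if base not in depth:
            depth[base] = 1 + max((d(x) for x in dag[base].deps), default=-1)
        return depth[base]
    layers = {}
    for s in sub_ops:
        layers.setdefault(d(s.name), []).append(s)
    return layers
```

A two-OP script (`a` feeding `b`) split over 2 GPUs yields four sub-OPs in two layers, with the layer of `b` only runnable after the layer of `a`.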

Embodiment 2

[0061] In this embodiment, a device for distributed data operation is also provided, which is used to implement the above embodiments and preferred implementation modes; what has already been described will not be repeated. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the devices described in the following embodiments are preferably implemented in software, implementations in hardware, or a combination of software and hardware, are also possible and contemplated.

[0062] Figure 5 is a structural block diagram of a data distributed operation device according to an embodiment of the present invention. As shown in Figure 5, the device includes:

[0063] The generation module 52 is used to generate a directed acyclic graph DAG from the stand-alone script submitted by the user, wherein the DAG includes a plurality of operation instances OP;

[0064] The splitting module 54 is coupled...
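The modular structure of Figure 5 can be sketched as a simple composition: the generation module feeds the splitting module, whose output goes to placement. The class and parameter names below are hypothetical; the patent only names the generation module (52) and splitting module (54) before the text is cut off.

```python
class DataDistributedDevice:
    """Illustrative composition of the device's modules (names assumed, not from the patent)."""

    def __init__(self, generation_module, splitting_module, placement_module):
        self.generation = generation_module    # script -> DAG        (module 52)
        self.splitting = splitting_module      # DAG -> sub-OPs       (module 54)
        self.placement = placement_module      # sub-OPs -> placement

    def run(self, user_script, gpu_request, system_gpus):
        dag = self.generation(user_script)
        sub_ops = self.splitting(dag, gpu_request, system_gpus)
        return self.placement(sub_ops)
```

Each module can be any callable with the right shape, which mirrors the text's note that modules may be software, hardware, or a combination.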

Embodiment 3

[0073] In this embodiment, OP scheduling and OP-graph decomposition are performed on the user's training script according to the GPU resources of the platform cluster, and automatic distributed parallel running of training tasks is realized in the cloud environment, so that the user's deep learning training tasks run intelligently with high concurrency and high performance.

[0074] In general: a DAG computation graph is generated from the stand-alone script submitted by the user, and a parallel scheme is then generated according to the resource request submitted by the user and the characteristics of the system's GPU resources, realizing automatic distributed parallel operation in this system and reducing the difficulty of algorithm development.

[0075] Specifically, when generating the automated parallel scheme, the OPs are split, the calculation loss of each sub-OP is computed, and each sub-OP is placed on a suitable GPU for execution, so as to balance the computing...



Abstract

The invention provides a data distributed operation method and device, a storage medium and a processor. The method comprises: generating a directed acyclic graph (DAG) from a single-machine script submitted by a user, wherein the DAG comprises a plurality of operation instances (OPs); splitting the OPs according to the generated DAG, the GPU resource request submitted by the user and the system GPU resources to obtain a plurality of sub-OPs; and placing the plurality of sub-OPs on different computing nodes according to the calculation loss of each sub-OP so as to divide the plurality of sub-OPs into a plurality of layers, and running the sub-OPs on the computing nodes of the plurality of layers in parallel, wherein the calculation loss of a sub-OP of the current layer is smaller than the calculation loss of the other sub-OPs on the current layer. The invention solves the prior-art problem that, when a user writes distributed code in a large-scale cloud environment, dynamic allocation of devices makes it impossible to specify a specific device in the code, and improves the algorithm efficiency of the system.

Description

Technical field

[0001] The present invention relates to the field of communications, and in particular to a data distributed operation method and device, a storage medium and a processor.

Background technique

[0002] Artificial intelligence is one of the most cutting-edge technologies of the 21st century; deep learning is currently the most effective way to realize artificial intelligence and the hottest branch of machine learning.

[0003] Model training for deep learning algorithms places extremely high demands on computing power. For example, FaceNet, a face model developed by Google, contains 140 million parameters, and a single inference costs 1.6 billion floating-point operations. To improve computing power, one approach is to increase single-point computing power, for example by using hardware such as the Graphics Processing Unit (GPU) or the Field Programmable Gate Array (FPGA) to accelerate com...

Claims


Application Information

IPC(8): G06F8/30, G06N20/00
CPC: G06F8/30, G06N20/00
Inventors: 陈秀玲, 周祥生, 屠要峰, 黄震江, 高洪
Owner ZTE CORP