
AI calculation configuration method and device, equipment and storage medium

An AI computing configuration method and computing-graph technology applied in the field of deep learning. It addresses the problem that existing configuration work is time-consuming, laborious, and harms work efficiency, and achieves the effects of improving efficiency and performance while balancing computing speed and efficiency.

Pending Publication Date: 2020-10-09
SHENZHEN CORERAIN TECH CO LTD

AI Technical Summary

Problems solved by technology

However, at present, the code for AI computing configuration operations is usually written by technicians for a specified custom chip. When the hardware architecture of the custom chip changes, the technicians must rewrite this configuration code, which is time-consuming, laborious, and harms work efficiency.

Method used

Figure 1 is a schematic flow chart of the AI computing configuration method provided in Embodiment 1; Figure 2 is a schematic flow chart of the method provided in Embodiment 2; Figure 3A is a schematic flow chart of the method provided in Embodiment 3.


Examples


Embodiment 1

[0056] Figure 1 is a schematic flow chart of the AI computing configuration method provided in Embodiment 1 of the present invention. This embodiment is applicable to separating the computing graph of a deep learning model based on a data flow architecture. The method can be executed by an AI computing configuration apparatus, which can be implemented in hardware and/or software. As shown in Figure 1, the AI computing configuration method provided by Embodiment 1 of the present invention includes:

[0057] S110. Obtain a computing graph based on the data flow architecture, where the computing graph includes multiple computing nodes.

[0058] Specifically, the computing graph based on the data flow architecture refers to the computing graph of a deep learning model developed on a data flow architecture. A computing graph is a computing process whose data structure is a directed acyclic graph; it includes multiple computing nodes, each computing node represen...
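As an illustration only, the directed-acyclic-graph structure described above might be modeled as follows; the class and field names (ComputeNode, ComputeGraph, node_type, etc.) are assumptions for the sketch, not the patent's own code.

# A minimal sketch of a computing graph: a directed acyclic graph whose
# nodes each describe one computing operation of the deep learning model.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ComputeNode:
    name: str
    node_type: str                                      # e.g. "conv2d", "matmul", "reshape"
    inputs: List[str] = field(default_factory=list)     # names of upstream computing nodes

@dataclass
class ComputeGraph:
    nodes: List[ComputeNode] = field(default_factory=list)

    def add_node(self, node: ComputeNode) -> None:
        self.nodes.append(node)

# Example: a tiny graph with two computing nodes.
graph = ComputeGraph()
graph.add_node(ComputeNode("conv1", "conv2d"))
graph.add_node(ComputeNode("relu1", "activation", inputs=["conv1"]))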

Embodiment 2

[0068] Figure 2 is a schematic flow chart of the AI computing configuration method provided by Embodiment 2 of the present invention; this embodiment further optimizes the foregoing embodiment. As shown in Figure 2, the AI computing configuration method provided by Embodiment 2 of the present invention includes:

[0069] S210. Acquire a computing graph based on the data flow architecture, where the computing graph includes multiple computing nodes.

[0070] S220. Determine a node type of the computing node.

[0071] S230. Determine a corresponding executable device according to the node type.

[0072] S240. When there are multiple executable devices, determine an execution time required by each executable device to execute the computing node.

[0073] S250. Determine the execution condition of each executable device.

[0074] Specifically, the execution conditions of the executable device refer to the conditions under which the computing chip ...
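A minimal sketch of steps S220 to S250 under assumed names: determine a node's type, look up which devices can execute that type, and record each candidate device's estimated execution time and whether its execution condition currently holds. The device names, the time table, and the resource-based condition check are illustrative assumptions, not the patent's actual logic.

from typing import Dict, List, NamedTuple

class Candidate(NamedTuple):
    device: str
    exec_time: float      # estimated time for this device to execute the node (S240)
    condition_ok: bool    # whether the device's execution condition is met (S250)

# Assumed mapping from node type to the devices able to execute it (S230).
EXECUTABLE_DEVICES: Dict[str, List[str]] = {
    "conv2d": ["dataflow_chip", "cpu"],
    "reshape": ["cpu"],
}

def collect_candidates(node_type: str,
                       time_table: Dict[str, float],
                       free_resources: Dict[str, int],
                       needed: int) -> List[Candidate]:
    devices = EXECUTABLE_DEVICES.get(node_type, ["cpu"])
    return [Candidate(d,
                      time_table.get(d, float("inf")),
                      free_resources.get(d, 0) >= needed)   # assumed execution condition
            for d in devices]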

Embodiment 3

[0084] Figure 3A is a schematic flow chart of the AI computing configuration method provided by Embodiment 3 of the present invention; this embodiment further optimizes the foregoing embodiments. As shown in Figure 3A, the AI computing configuration method provided by Embodiment 3 of the present invention includes:

[0085] S301. Acquire a computing graph based on a data flow architecture, where the computing graph includes multiple computing nodes.

[0086] S302. Determine a node type of the computing node.

[0087] S303. Determine a corresponding executable device according to the node type.

[0088] S304. When there are multiple executable devices, determine the execution time required by each executable device to execute the computing node.

[0089] S305. Determine the execution condition of each executable device.

[0090] S306. Determine the optimal execution device according to the execution conditions and execution time of each executabl...
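A hedged sketch of step S306: among the candidate devices gathered for a computing node, keep only those whose execution condition is satisfied and pick the one with the shortest execution time as the optimal execution device. The fallback when no condition holds is my assumption; it reuses the illustrative collect_candidates sketch above.

def optimal_device(candidates):
    # Keep candidates whose execution condition is met; fall back to all
    # candidates if none qualifies (assumed behavior, not from the patent).
    eligible = [c for c in candidates if c.condition_ok]
    pool = eligible if eligible else candidates
    # The optimal execution device is the eligible one with the shortest time.
    return min(pool, key=lambda c: c.exec_time).device

# Usage with the collect_candidates sketch above (all numbers are made up):
cands = collect_candidates("conv2d",
                           time_table={"dataflow_chip": 0.8, "cpu": 4.2},
                           free_resources={"dataflow_chip": 16, "cpu": 64},
                           needed=8)
print(optimal_device(cands))   # -> "dataflow_chip" under these assumed numbers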



Abstract

The embodiments of the invention disclose an AI computing configuration method, apparatus, device, and storage medium. The method comprises: acquiring a computing graph based on a data flow architecture, the computing graph comprising a plurality of computing nodes; determining a node type of a computing node; determining a corresponding executable device according to the node type; and allocating the computing node to an executable device for execution. According to the embodiments of the invention, the executable device of a computing node is allocated according to its node type, so the number of computing nodes in the computing graph does not affect the separation work; automatic separation of the computing graph of a deep learning model based on the data flow architecture is achieved, and the efficiency of the AI computing configuration operation is improved. The executable device with the faster computation is further selected according to the execution time of each executable device, so that computing speed and efficiency are both taken into account while AI computing configuration is realized, and the performance of the computing graph is improved.
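Tying the abstract's four steps together, a top-level sketch under the same assumed names might look as follows: acquire the computing graph, determine each node's type, determine the executable devices for that type, and allocate the node to a device. It reuses the illustrative helpers sketched in the embodiments above and is not the patent's implementation.

def configure(graph, time_table, free_resources, needed):
    allocation = {}
    for node in graph.nodes:                            # each computing node in the graph
        cands = collect_candidates(node.node_type,      # node type -> executable devices
                                   time_table, free_resources, needed)
        allocation[node.name] = optimal_device(cands)   # allocate the node to a device
    return allocation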

Description

Technical field

[0001] The embodiments of the present invention relate to the technical field of deep learning, and in particular to an AI computing configuration method, apparatus, device, and storage medium.

Background technique

[0002] With the development of deep learning technology, the demand for computing power in deep learning keeps growing, and meeting this growing demand is a major challenge for deep learning technology. Compared with traditional instruction-set-architecture chips, the computing chips with a data flow architecture developed in recent years can satisfy greater computing power requirements and offer high efficiency and low latency, so they have attracted considerable attention.

[0003] In order to further improve computing power, customized computing chips can also be used, which has given rise to deep learning custom chips based on the data flow architecture. Although this kind of custom chip has power...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06F9/50
CPC: G06F9/5044; Y02D10/00
Inventors: 邹伟, 熊超, 蔡权雄, 牛昕宇
Owner: SHENZHEN CORERAIN TECH CO LTD