
MPI-based neural network architecture search parallelization method and device

A neural network and hierarchical-structure technology applied in the field of neural network search parallelization. It addresses the problems that local IO processing cannot be greatly accelerated and that GPU computing power and video memory are difficult to increase further, achieving the effects of easy, simple expansion and improved training efficiency.

Active Publication Date: 2020-06-23
中科弘云科技(北京)有限公司

AI Technical Summary

Problems solved by technology

[0003] Stand-alone deep neural network training is limited in that local IO processing cannot be greatly accelerated, and the GPU computing power and video memory of a single machine are difficult to increase further.



Examples


Embodiment 1

[0044] The embodiment of the present invention provides an MPI-based neural network architecture search parallelization method. As shown in Figure 16, the method includes the following steps:

S101: Start a plurality of MPI processes according to the number of GPUs in the current multi-machine environment and arrange them in order, where the multi-machine environment includes a plurality of machine nodes, each node includes multiple GPUs and multiple MPI task processes, and each MPI task process performs neural network architecture search training according to its input parameters.

S102: Each started MPI process reads data from the position in the training set designated by its own serial number and performs gradient calculation.

S103: The multiple GPUs of each node perform gradient reduction calculations according to the hierarchical structure and aggregate the calculation results to the first GPU among the multiple GPUs.

S104: The first GPUs perform a gradient all-reduce calculation according to a ring structure; starting from the first GPU of each node, the gradient calculation result is broadcast according to the hierarchical structure, and the weights and bias values of the neural network are updated with the new gradient values.
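For illustration only, the following is a minimal sketch of the S101–S104 communication pattern using mpi4py and NumPy; it is not part of the patent disclosure. The node layout (GPUS_PER_NODE processes per machine, one per GPU) and the random gradient buffer are assumptions, and mpi4py's Allreduce stands in for the patent's explicit ring structure, since MPI libraries commonly use a ring algorithm for large buffers.

```python
import numpy as np
from mpi4py import MPI

GPUS_PER_NODE = 4  # assumed layout; the patent starts one MPI process per GPU (S101)

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Arrange processes in order (S101): contiguous ranks share a machine node.
node_id = rank // GPUS_PER_NODE
local_rank = rank % GPUS_PER_NODE

# One communicator per node, plus one linking the "first GPU" process of every node.
node_comm = comm.Split(color=node_id, key=local_rank)
leader_comm = comm.Split(color=0 if local_rank == 0 else MPI.UNDEFINED, key=node_id)

# Stand-in for the local gradient computed from this process's data shard (S102).
grad = np.random.rand(1 << 20).astype(np.float32)

# S103: intra-node gradient reduction; the sum lands on the node's first GPU process.
node_sum = np.empty_like(grad) if local_rank == 0 else None
node_comm.Reduce(grad, node_sum, op=MPI.SUM, root=0)

# S104: the first GPUs of all nodes reduce among themselves; Allreduce stands in
# for the ring-structured all-reduce described in the patent.
global_sum = np.empty_like(grad)
if local_rank == 0:
    leader_comm.Allreduce(node_sum, global_sum, op=MPI.SUM)

# Broadcast the result back down the hierarchy from each node's first GPU, then
# average so every process holds the same new gradient for the weight update.
node_comm.Bcast(global_sum, root=0)
new_grad = global_sum / comm.Get_size()
```

Launched as, for example, mpiexec -n 8 python sketch.py on two 4-GPU nodes, every process ends up holding the same averaged gradient while only the node-leader processes communicate across machines, which is the point of the hierarchical scheme.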

Embodiment 2

[0090] According to an embodiment of the present invention, an MPI-based neural network architecture search parallelization device is provided. As shown in Figure 17, it includes a memory 10, a processor 12, and a computer program stored in the memory 10 and executable on the processor 12; when the computer program is executed by the processor 12, the steps of the MPI-based neural network architecture search parallelization method described in Embodiment 1 above are realized.

Embodiment 3

[0092] According to an embodiment of the present invention, a computer-readable storage medium is provided. The computer-readable storage medium stores a program for implementing information transmission; when the program is executed by a processor, the steps of the MPI-based neural network architecture search parallelization method described in Embodiment 1 above are implemented.

[0093] Through the above description of the implementation manners, those skilled in the art can clearly understand that the present application can be realized by software plus necessary general-purpose hardware, and of course can also be realized by hardware alone. Based on this understanding, the essence of the technical solution of the present application, or the part that contributes over related technologies, can be embodied in the form of a software product; the computer software product can be stored in a computer-readable storage medium, such as a computer floppy disk or Read-Only Memory (ROM).



Abstract

The invention relates to an MPI-based neural network architecture search parallelization method and device. The method comprises the following steps: MPI processes are started according to the number of GPUs in the current multi-machine environment and arranged in sequence; each started MPI process reads data from the position in the training set designated by its own serial number and carries out gradient calculation; the GPUs of each node perform gradient reduction calculation according to the hierarchical structure and aggregate the calculation results to the first GPU among them; the first GPUs perform a gradient all-reduce calculation according to a ring structure; starting from the first GPU in each node, the gradient calculation result is broadcast according to the hierarchical structure; and the weights and bias values of the neural network are updated with the new gradient values. On the basis of preserving the recognition rate of the model produced by the neural network architecture search, the method effectively improves the efficiency of neural network architecture search training and greatly shortens training time, thereby improving the efficiency of the automated deep learning process.
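As an illustration of the rank-based data reading described above, here is a minimal sketch, with an assumed file name (train.npy) and a memory-mapped NumPy layout that are not from the patent: each MPI process derives its read position in the training set from its own rank, so local IO is split across processes rather than concentrated on one machine.

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Assumed training set: a NumPy array saved as "train.npy" (hypothetical file).
# Memory-mapping means each process only pages in its own shard.
data = np.lib.format.open_memmap("train.npy", mode="r")

shard = len(data) // size        # contiguous samples per process
start = rank * shard             # read position designated by this process's rank
local_data = data[start:start + shard]
# local_data now feeds this process's gradient calculation (S102).
```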

Description

Technical field

[0001] The invention relates to the technical field of neural network search parallelization, and in particular to an MPI-based neural network architecture search parallelization method and device.

Background technique

[0002] At present, discovering an efficient neural network architecture requires a considerable amount of work from deep learning experts, who must manually build a suitable neural network architecture for each application area. This working mode consumes a great deal of practitioners' energy and time. To address this problem, a variety of methods for automatically searching neural network architectures have been proposed; the better-performing algorithms include reinforcement learning and evolutionary learning. Since they all search in discrete spaces, they require a huge amount of computation, often more than 1,000 GPU-days. Some scholars have also proposed differentiable neural architecture search methods.


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06F16/245, G06N3/04, G06N3/08
CPC: G06F16/24569, G06N3/084, G06N3/045, Y02D10/00
Inventor: 曹连雨
Owner: 中科弘云科技(北京)有限公司