
Deep learning memory management method and system based on tensor access

A technology combining memory management and deep learning, applied in the field of deep learning memory management methods and systems, which can solve problems such as insufficient memory and achieve effective memory management.

Active Publication Date: 2021-04-27
ZHEJIANG LAB +1
Cites: 3 · Cited by: 0
  • Summary
  • Abstract
  • Description
  • Claims
  • Application Information

AI Technical Summary

Problems solved by technology

That is, by making appropriate scheduling decisions for each Tensor during deep learning training, the problem of insufficient memory can be solved while efficient training performance is maintained.



Examples


Embodiment

[0084] In one configuration, the GPU is a Tesla V100 with 32 GB of video memory, the CPU is an Intel® Xeon® Gold 6126 @ 2.60 GHz, the operating system is Ubuntu 18.04.3 LTS, the CUDA Toolkit version is 9.0, and the PyTorch version is 1.5. The existing Capuchin deep learning memory management method and the method of the present invention are compared on this machine.

[0085] Training with the two optimization methods is performed on the VGG16 network, and the results are shown in Figure 3. In terms of training speed, the method of the present invention outperforms Capuchin at all batch sizes. In terms of memory footprint, the maximum batch size supported by the method of the present invention is 5500, while the maximum batch size supported by Capuchin is only 4000.



Abstract

The present invention provides a deep learning memory management method based on Tensor access. By collecting the execution information of the neural network and the performance information of the hardware platform, the method obtains the memory space overhead and time overhead associated with each candidate decision and establishes an integer linear programming model. By solving for the optimal Tensor scheduling strategy under the model's constraints, it can solve the problem of insufficient memory while achieving high deep learning training performance. Compared with the prior art, on the same hardware, the present invention can train neural networks with a larger batch size. The present invention also proposes a memory management system comprising a profile module, a decision-making module and an execution module; the system can be added directly to a deep learning framework and is easy to use.
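The abstract describes the overall approach (profile tensor accesses, build an integer linear program, solve for a scheduling strategy) without giving the concrete formulation. The sketch below is only an illustration of that idea under stated assumptions: the tensor names, sizes, live ranges, memory budget, transfer bandwidth, and the use of the PuLP solver are all hypothetical and are not taken from the patent. Each binary variable decides whether a tensor is swapped out to host memory between its accesses; the constraints keep the resident tensors within the GPU memory budget at every execution step; the objective minimizes the added transfer time.

```python
# Illustrative sketch only: a toy integer linear program for deciding which
# tensors to keep in GPU memory and which to swap to host memory.  All
# numbers and names below are assumptions, not values from the patent.
import pulp

# Profiled information (in the patented system this would come from the
# profile module): per-tensor size and the steps during which it is live.
tensors = {
    "act1": {"size_mb": 512, "live_steps": range(1, 8)},
    "act2": {"size_mb": 768, "live_steps": range(2, 7)},
    "act3": {"size_mb": 256, "live_steps": range(3, 6)},
}
steps = range(1, 8)               # execution steps of one training iteration
gpu_budget_mb = 1024              # assumed GPU memory available for these tensors
bandwidth_mb_per_s = 12_000.0     # assumed host<->device transfer bandwidth

model = pulp.LpProblem("tensor_scheduling", pulp.LpMinimize)

# x[t] = 1 if tensor t is swapped out to host memory between its accesses.
x = {t: pulp.LpVariable(f"swap_{t}", cat="Binary") for t in tensors}

# Objective: minimise the extra transfer time (swap out + swap back in).
model += pulp.lpSum(
    x[t] * (2 * info["size_mb"] / bandwidth_mb_per_s)
    for t, info in tensors.items()
)

# Constraint: at every step the tensors kept resident must fit in the budget.
for s in steps:
    model += pulp.lpSum(
        (1 - x[t]) * info["size_mb"]
        for t, info in tensors.items()
        if s in info["live_steps"]
    ) <= gpu_budget_mb

model.solve(pulp.PULP_CBC_CMD(msg=False))
for t in tensors:
    print(t, "swap to host" if x[t].value() == 1 else "keep on GPU")
```

In the full system the profile module would supply the measured sizes, access times and transfer costs, and the execution module would carry out the swap-out and swap-in operations chosen by the decision module; this toy model only shows the general shape of the decision problem.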

Description

Technical field

[0001] The present invention relates to the field of computer science and artificial intelligence, in particular to a Tensor-access-based deep learning memory management method and system.

Background technique

[0002] The innovation of deep learning technology has greatly promoted the development of computer vision, natural language processing, medicine and other fields. Larger and deeper network structures achieve higher accuracy in training, so adopting large and wide networks has become a trend in the deep learning community. However, the storage capacity of deep learning accelerators represented by GPUs is very limited and cannot accommodate the large amount of data produced during the training of deep learning models, which seriously restricts the development of deep learning technology. Because larger and deeper network structures require more memory, solving the problem of insufficient memory in deep learning training is of great value. Existing solutions…

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06F 9/50; G06N 3/04; G06N 3/08
CPC: G06F 9/5016; G06N 3/08; G06N 3/045
Inventors: 何水兵, 陈帅犇, 陈平, 杨斯凌, 陈伟剑, 孙贤和, 陈刚, 银燕龙, 毛旷
Owner ZHEJIANG LAB