
Deep learning memory management method and system based on Tensor access

A memory management technology for deep learning, applied in the field of deep learning memory management methods and systems, which can solve problems such as insufficient memory and achieve effective memory management.

Active Publication Date: 2021-02-02
ZHEJIANG LAB +1

AI Technical Summary

Problems solved by technology

That is, by making appropriate per-Tensor decisions during deep learning training, the problem of insufficient memory can be solved while efficient training performance is maintained.
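
As a minimal illustration only (not taken from the patent text), such a per-Tensor decision could be represented as follows; the decision types and tensor names are assumptions made for the example:

    from dataclasses import dataclass
    from enum import Enum

    class Decision(Enum):
        KEEP = "keep resident in GPU memory"
        SWAP = "swap to host memory between its accesses"
        RECOMPUTE = "free it and recompute it before its next access"

    @dataclass
    class TensorPlan:
        name: str           # hypothetical tensor identifier
        size_mb: float      # memory footprint of the tensor
        decision: Decision  # action applied between two accesses of the tensor

    # example plan for two made-up activation tensors
    plan = [
        TensorPlan("conv1_output", 512.0, Decision.SWAP),
        TensorPlan("fc1_output", 64.0, Decision.KEEP),
    ]
    for p in plan:
        print(f"{p.name}: {p.decision.value}")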


Examples

Embodiment

[0084] In one configuration, the GPU is a Tesla V100 with 32 GB of video memory, the CPU is an Intel® Xeon® Gold 6126 @ 2.60 GHz, the operating system is Ubuntu 18.04.3 LTS, the CUDA Toolkit version is 9.0, and the PyTorch version is 1.5. The existing Capuchin deep learning memory management method is compared with the method of the present invention.

[0085] Training with the two optimization methods is performed on the vgg16 network, with the results shown in Figure 3. In terms of training speed, the method of the present invention outperforms Capuchin at every batch size. In terms of memory footprint, the maximum batch size supported by the method of the present invention is 5500, while the maximum batch size of Capuchin is only 4000.
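
A minimal sketch of how such a maximum-batch-size measurement could be reproduced is shown below; it is not the authors' benchmark script, and the starting batch size, step, and image size are assumptions:

    import torch
    import torchvision

    def max_batch_size(start=64, step=64, image_size=224, device="cuda"):
        """Grow the batch size until a vgg16 training step runs out of GPU memory."""
        model = torchvision.models.vgg16().to(device)
        opt = torch.optim.SGD(model.parameters(), lr=0.01)
        best, bs = 0, start
        while True:
            try:
                x = torch.randn(bs, 3, image_size, image_size, device=device)
                y = torch.randint(0, 1000, (bs,), device=device)
                opt.zero_grad()
                loss = torch.nn.functional.cross_entropy(model(x), y)
                loss.backward()
                opt.step()
                best, bs = bs, bs + step   # this batch size fits; try a larger one
            except RuntimeError:           # typically "CUDA out of memory"
                torch.cuda.empty_cache()
                return best

    if __name__ == "__main__":
        print("largest batch size that fits:", max_batch_size())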


Abstract

The invention provides a deep learning memory management method based on Tensor access. The method comprises the steps of collecting the execution information of a neural network and the performance information of the hardware platform, obtaining the memory space overhead and time overhead under the relevant decisions, building an integer linear programming model, and solving for an optimal Tensor scheduling strategy through optimization under constraint conditions, so that the problem of insufficient memory is solved and high deep learning training performance is obtained. Compared with the prior art, larger batch-size neural network training can be realized under the same hardware. The invention further provides a memory management system comprising a profile module, a decision module and an execution module. The system can be directly added to a deep learning framework and is convenient to use.
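
As an illustrative sketch only, and not the patent's actual formulation, an integer linear program of this general shape can be written with the PuLP library; the tensor names, sizes, eviction overheads, and memory budget below are made-up placeholders:

    from pulp import LpProblem, LpMinimize, LpVariable, LpBinary, lpSum, PULP_CBC_CMD

    # hypothetical profiling results: tensor -> (size in MB, time overhead in ms if evicted)
    tensors = {
        "conv1_act": (512, 3.0),
        "conv2_act": (1024, 6.2),
        "fc1_act": (256, 1.4),
    }
    MEMORY_BUDGET_MB = 1024  # assumed device memory available for these tensors

    prob = LpProblem("tensor_scheduling", LpMinimize)
    # x[t] = 1 means tensor t is evicted (e.g. swapped to host) between its accesses
    x = {t: LpVariable(f"evict_{t}", cat=LpBinary) for t in tensors}

    # objective: minimize the total time overhead introduced by evictions
    prob += lpSum(x[t] * tensors[t][1] for t in tensors)

    # constraint: tensors kept resident must fit within the memory budget
    prob += lpSum((1 - x[t]) * tensors[t][0] for t in tensors) <= MEMORY_BUDGET_MB

    prob.solve(PULP_CBC_CMD(msg=False))
    schedule = {t: ("evict" if x[t].value() == 1 else "keep") for t in tensors}
    print(schedule)

In terms of the system described above, the profile module would supply the measured sizes and overheads, the decision module would solve a program of this kind, and the execution module would carry out the resulting schedule during training.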

Description

Technical Field

[0001] The present invention relates to the field of computer science and artificial intelligence, in particular to a Tensor-access-based deep learning memory management method and system.

Background Technique

[0002] The innovation of deep learning technology has greatly promoted the development of computer vision, natural language processing, medicine and other fields. Deeper and wider network structures achieve higher training accuracy, so adopting large and wide networks has become a trend in the deep learning community. However, the storage capacity of deep learning accelerators represented by GPUs is very limited and cannot accommodate the large amount of data produced during the training of deep learning models, which seriously restricts the development of deep learning technology. Since larger and deeper network structures require more memory, solving the problem of insufficient memory in deep learning training is of great value. Existing solutio...


Application Information

Patent Type & Authority Applications(China)
IPC IPC(8): G06F9/50G06N3/04G06N3/08
CPCG06F9/5016G06N3/08G06N3/045
Inventor 何水兵陈帅犇陈平杨斯凌陈伟剑孙贤和陈刚银燕龙毛旷
Owner ZHEJIANG LAB