
Execution method, execution device, learning method, learning device, and program for deep neural network

A deep-neural-network technology covering an execution method, execution device, learning method, learning device, and program. It addresses the problem that DNNs impose too much processing load for low-power deployment, achieving low processing load with only a small reduction in accuracy.

Active Publication Date: 2020-01-07
PANASONIC INTELLECTUAL PROPERTY CORP OF AMERICA

AI Technical Summary

Problems solved by technology

However, the memory and computation required by DNNs can place too much processing load on low-power deployments.




Embodiment Construction

[0026] An execution method for a deep neural network according to an aspect of the present disclosure includes: obtaining a binary intermediate feature map in a binary representation by converting a floating-point or fixed-point intermediate feature map into a binary vector during deep neural network inference, using a first transformation module; generating a compressed feature map by compressing the binary intermediate feature map using a nonlinear dimensionality reduction layer; storing the compressed feature map in memory; reconstructing the binary intermediate feature map by decompressing the compressed feature map read from the memory, using a reconstruction layer corresponding to the nonlinear dimensionality reduction layer; and converting the reconstructed binary intermediate feature map into a floating-point or fixed-point intermediate feature map using a second transformation module.

[0027] In this way, DNNs can be executed with very little memory usage compared to tradition...
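The pipeline of paragraph [0026] can be sketched in numpy as below. This is an illustrative sketch only, not the patented implementation: the names `fmap`, `to_binary`, and `from_binary` are hypothetical, and lossless bit-packing stands in for the learned nonlinear dimensionality reduction layer (the disclosed method would instead train that layer, together with a corresponding reconstruction layer, over the binary representation).

```python
import numpy as np

rng = np.random.default_rng(0)

# Quantized intermediate feature map (e.g. 4-bit fixed point), shape (C, H, W).
fmap = rng.integers(0, 16, size=(8, 4, 4), dtype=np.uint8)
B = 4  # bits per value

def to_binary(x, bits):
    """First transformation module: expand each value into its 0/1 bit vector."""
    shifts = np.arange(bits, dtype=np.uint8)
    return (x[..., None] >> shifts) & 1          # shape (C, H, W, bits)

def from_binary(b):
    """Second transformation module: pack bit vectors back into integer values."""
    shifts = np.arange(b.shape[-1], dtype=np.uint8)
    return (b.astype(np.uint8) << shifts).sum(axis=-1).astype(np.uint8)

bits = to_binary(fmap, B)

# Stand-in for the learned nonlinear dimensionality reduction layer: pack the
# binary feature map into bytes before "storing" it in memory. The reverse
# unpacking plays the role of the corresponding reconstruction layer.
compressed = np.packbits(bits)                   # buffer written to memory
restored_bits = np.unpackbits(compressed)[: bits.size].reshape(bits.shape)

restored = from_binary(restored_bits)            # reconstructed feature map
assert np.array_equal(restored, fmap)
```

The memory saving here comes from holding the compressed buffer (rather than the full-precision feature map) between layers; in the disclosure the compression ratio is determined by the learned dimensionality reduction layer rather than by fixed bit-packing.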



Abstract

Executing a deep neural network by: obtaining, during deep neural network inference, a binary intermediate feature map in binary representation by converting a floating-point or fixed-point intermediate feature map into a binary vector using a first transformation module (S210, S215); generating a compressed feature map by compressing the binary intermediate feature map using a nonlinear dimensionality reduction layer (S220); storing the compressed feature map into memory; reconstructing the binary intermediate feature map by decompressing the compressed feature map read from the memory using a reconstruction layer corresponding to the nonlinear dimensionality reduction layer (S240); and converting the reconstructed binary intermediate feature map into a floating-point or fixed-point intermediate feature map using a second transformation module (S245, S250).

Description

technical field

[0001] The present disclosure relates, for example, to an execution method for a deep neural network.

Background technique

[0002] Recent achievements in deep neural networks (hereafter referred to as DNNs) have made them an attractive choice for many computer vision applications, including image classification and object detection. However, the memory and computation required by DNNs may impose too much processing load for low-power deployments.

[0003] Proposed methods to reduce this processing load include: layer fusion, in which the computations of adjacent layers are fused so that intermediate feature maps need not be stored in memory; and compression of quantized feature map values using nonlinear dimensionality reduction layers (see Non-Patent Literature (NPL) 1 and 2).

[0004] reference list

[0005] non-patent literature

[0006] [NPL 1]

[0007] M. Alwani, H. Chen, M. Ferdman, and P. A. Milder; Fused-layer CNN accelerators; Micro; pp. 1-12; Octobe...

Claims


Application Information

Patent Timeline
Patent Type & Authority: Applications (China)
IPC (8): G06N3/02; G06N3/04; G06N3/06; G06N3/08
CPC: G06N3/084; G06N3/063; G06N3/048; G06N3/045; G06N3/04; G10L15/22
Inventor: D. A. Gudovskiy, L. Rigazio
Owner PANASONIC INTELLECTUAL PROPERTY CORP OF AMERICA