
Systems and methods for block-sparse recurrent neural networks

A neural network model and implementation method, applied in the field of computer learning systems, that addresses problems such as irregular memory access, the inability to utilize array data paths, and the inefficient use of hardware resources by sparse formats.

Pending Publication Date: 2019-05-07
BAIDU USA LLC

AI Technical Summary

Problems solved by technology

Although sparse operations require less computation and memory than their dense counterparts, the observed speedup from sparse operations is smaller than expected on many hardware platforms. Sparse formats do not use hardware resources efficiently because of storage overhead, irregular memory accesses, and an inability to take advantage of the array data paths in modern processors.
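
To make the storage-overhead and irregular-access point concrete, below is a minimal NumPy sketch (the function names and square-tile layout are illustrative assumptions, not code from the patent). The element-wise product gathers one scattered entry of x per stored nonzero, while the block-sparse product performs one dense, contiguous matrix-vector multiply per stored tile, which is the access pattern that array data paths favor.

```python
import numpy as np

def unstructured_spmv(values, rows, cols, x, m):
    """COO-style product: one scattered load of x per stored nonzero."""
    y = np.zeros(m)
    for v, r, c in zip(values, rows, cols):
        y[r] += v * x[c]  # irregular access pattern, poor cache behavior
    return y

def block_spmv(blocks, block_rows, block_cols, x, m, bs):
    """Block-sparse product: each stored nonzero is a dense bs-by-bs tile,
    so every partial product is one contiguous dense mat-vec."""
    y = np.zeros(m)
    for blk, br, bc in zip(blocks, block_rows, block_cols):
        # one dense GEMV per tile -> regular, vectorizable memory access
        y[br * bs:(br + 1) * bs] += blk @ x[bc * bs:(bc + 1) * bs]
    return y

# Example: a 4x4 matrix with a single nonzero 2x2 block at block position (0, 1).
blk = np.arange(4.0).reshape(2, 2)
x = np.ones(4)
y = block_spmv([blk], [0], [1], x, m=4, bs=2)  # -> [1., 5., 0., 0.]
```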




Embodiment approach

[0132] In embodiments, aspects of this patent document may relate to, may include, or may be implemented on one or more information handling systems (computing systems). A computing system may include any device, or aggregation of devices, operable to compute, calculate, determine, classify, process, transmit, receive, retrieve, create, route, switch, store, display, communicate, authenticate, detect, record, or reproduce any form of information, intelligence, or data. For example, a computing system may be or may include a personal computer (e.g., a laptop), a tablet computer, a phablet, a personal digital assistant (PDA), a smartphone, a smart watch, a smart bag, a server (e.g., a blade server or a rack server), a network storage device, a camera, or any other suitable device, and may vary in size, shape, performance, functionality, and price. A computing system may include random access memory (RAM), one or more processing resources (such as a central p...



Abstract

Described herein are systems and methods to prune deep neural network models, reducing the overall memory and compute requirements of these models. It is demonstrated that, using block pruning and group lasso combined with pruning during training, block-sparse recurrent neural networks (RNNs) can be built that are as accurate as dense baseline models. Two different approaches are disclosed to induce block sparsity in neural network models: pruning blocks of weights in a layer, and using group lasso regularization to create blocks of weights with zeros. Using these techniques, it is demonstrated that block-sparse RNNs with high sparsity can be created with small loss in accuracy. Block-sparse RNNs eliminate overheads related to data storage and irregular memory accesses while increasing hardware efficiency compared to unstructured sparsity.
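
As a rough sketch of the two disclosed approaches, the snippet below uses hypothetical helper names and a simple square-block partition; it is an illustration of the ideas, not the patent's implementation. The first function zeroes any block whose largest-magnitude weight falls below a threshold, matching the "pruning blocks of weights in a layer" idea; the second computes a group lasso penalty, the sum of per-block L2 norms, which when added to the training loss drives whole blocks toward zero.

```python
import numpy as np

def prune_blocks(w, bs, threshold):
    """Zero every bs-by-bs block whose max |weight| is below threshold."""
    w = w.copy()
    n_rows, n_cols = w.shape
    for i in range(0, n_rows, bs):
        for j in range(0, n_cols, bs):
            block = w[i:i + bs, j:j + bs]  # view into the copy
            if np.abs(block).max() < threshold:
                block[...] = 0.0           # prune the whole block
    return w

def group_lasso_penalty(w, bs, lam):
    """lam * sum over blocks g of ||w_g||_2. The gradient of each term
    shrinks all weights in a block together, inducing block sparsity."""
    n_rows, n_cols = w.shape
    total = 0.0
    for i in range(0, n_rows, bs):
        for j in range(0, n_cols, bs):
            total += np.linalg.norm(w[i:i + bs, j:j + bs])
    return lam * total
```

During training, the penalty would be added to the loss before back-propagation, and pruning would be applied on a schedule with a growing threshold, consistent with the abstract's combination of group lasso with pruning during training.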

Description

Technical Field

[0001] The present invention generally relates to systems and methods for computer learning that provide improved computer performance, features, and usage.

Background

[0002] Recurrent neural networks (RNNs) are state-of-the-art models used in fields such as speech recognition, machine translation, and language modeling. Sparsity is a technique for reducing the computational and memory requirements of deep learning models. Sparse RNNs are easier to deploy on devices and on high-end server processors. Although sparse operations require less computation and memory than their dense counterparts, the observed speedup from sparse operations is smaller than expected on many hardware platforms. Sparse formats do not utilize hardware resources efficiently because of storage overhead, irregular memory accesses, and an inability to take advantage of the array data paths in modern processors.

[0003] Accordingly, systems and methods for neural networ...
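
For a concrete sense of how block sparsity cuts the storage overhead described above, the snippet below uses SciPy's block-compressed (BSR) format as a stand-in for whatever format a deployment would actually use; the matrix and block sizes are arbitrary choices for illustration. Index overhead is paid once per block rather than once per element.

```python
import numpy as np
from scipy.sparse import bsr_matrix, csr_matrix

rng = np.random.default_rng(0)
w = np.zeros((64, 64))
# Fill a few random 8x8 blocks to mimic a block-pruned weight matrix.
for _ in range(8):
    i, j = rng.integers(0, 8, size=2) * 8
    w[i:i + 8, j:j + 8] = rng.standard_normal((8, 8))

csr = csr_matrix(w)                    # element-wise sparse format
bsr = bsr_matrix(w, blocksize=(8, 8))  # block-sparse format
print("CSR column indices stored:", csr.indices.size)  # one per nonzero
print("BSR column indices stored:", bsr.indices.size)  # one per block
```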


Application Information

IPC(8): G06F11/14, G06N3/08
CPC: G06N3/082, G06N3/044, G06N3/045, G06N3/04
Inventors: Sharan Narang, Eric Undersander, Gregory Diamos
Owner: BAIDU USA LLC