
System and method to reduce weight storage bits for deep-learning network

A weight-storage technology for deep-learning networks, applied to reducing the number of bits needed to store network weights. It addresses problems such as the implausibility of maintaining high-precision computation under hardware constraints.

Active Publication Date: 2018-07-17
SAMSUNG ELECTRONICS CO LTD

AI Technical Summary

Problems solved by technology

For example, software implementations can use double-precision (64-bit) calculations, but when hardware constraints such as physical size and power consumption are considered, this level of precision becomes implausible.
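To make the storage argument concrete, here is a minimal back-of-the-envelope sketch (not from the patent; the layer size and cluster count of 256 are illustrative assumptions) comparing double-precision weight storage against cluster-based storage, where each weight is replaced by a small index into a shared codebook of centroid values:

```python
# Hypothetical layer of 1,000,000 weights stored at double precision.
n = 1_000_000
fp64_bytes = n * 8                    # 64-bit floats: 8 bytes per weight

# If the weights are clustered into 256 shared values, each weight can
# instead be stored as an 8-bit index into a codebook of centroids.
codebook_bytes = 256 * 4              # 256 float32 centroids
index_bytes = n * 1                   # one byte (index) per weight
quantized_bytes = codebook_bytes + index_bytes

print(f"compression: {fp64_bytes / quantized_bytes:.1f}x")  # ~8.0x
```

Shrinking the cluster count further (e.g., to 16 clusters, or 4-bit indices) pushes the ratio higher, which is exactly the trade-off the method below explores layer by layer.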




Embodiment Construction

[0018] In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the present disclosure. However, it will be understood by those skilled in the art that the disclosed aspects may be practiced without these specific details. In other instances, well-known methods, procedures, components, and circuits have not been described in detail so as not to obscure the subject matter disclosed herein.

[0019] Reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment disclosed herein. Thus, appearances of the phrase "in one embodiment" or "in an embodiment" or "according to an embodiment" (or other phrases of similar meaning) in various places throughout this specification may not all be referring to the same embodiment. Furthermore, the particular f...



Abstract

A system and method to reduce weight storage bits for a deep-learning network includes a quantizing module and a cluster-number reduction module. The quantizing module quantizes the neural weights of each quantization layer of the deep-learning network. The cluster-number reduction module reduces the predetermined number of clusters for the layer whose clustering error is the minimum among the clustering errors of the plurality of quantization layers. The quantizing module then requantizes that layer based on its reduced number of clusters, and the cluster-number reduction module selects the next layer whose clustering error is the minimum among the quantized layers and reduces its cluster count, repeating until the recognition performance of the deep-learning network has been reduced by a predetermined threshold.
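The abstract describes a greedy loop: quantize every layer's weights into clusters, repeatedly pick the layer with the smallest clustering error, shrink its cluster count and requantize, and stop once recognition performance degrades past a threshold. The following is a minimal sketch of that loop under stated assumptions (it is not the patent's implementation): clustering is done with a simple Lloyd's k-means over scalar weights, and `accuracy_fn`, `k_init`, and `max_drop` are hypothetical names for the recognition-performance measure, initial cluster count, and allowed performance drop.

```python
import numpy as np

def kmeans_quantize(weights, k, iters=20):
    """Cluster a layer's scalar weights into k centroids (Lloyd's
    algorithm) and return (quantized_weights, clustering_error)."""
    w = weights.ravel()
    # Initialize centroids evenly across the weight range.
    centroids = np.linspace(w.min(), w.max(), k)
    for _ in range(iters):
        # Assign each weight to its nearest centroid.
        assign = np.abs(w[:, None] - centroids[None, :]).argmin(axis=1)
        for j in range(k):
            if np.any(assign == j):
                centroids[j] = w[assign == j].mean()
    quantized = centroids[assign].reshape(weights.shape)
    error = float(np.mean((w - centroids[assign]) ** 2))
    return quantized, error

def reduce_weight_bits(layers, k_init, accuracy_fn, max_drop):
    """Greedily shrink cluster counts, always reducing the layer whose
    clustering error is currently smallest (least sensitive), until
    recognition performance falls by more than max_drop."""
    ks = {name: k_init for name in layers}
    quant, errs = {}, {}
    for name, w in layers.items():
        quant[name], errs[name] = kmeans_quantize(w, ks[name])
    baseline = accuracy_fn(quant)
    while True:
        # Candidate: layer with the minimum clustering error.
        target = min(errs, key=errs.get)
        if ks[target] <= 2:
            break  # cannot reduce below two clusters
        trial_k = ks[target] - 1
        q, e = kmeans_quantize(layers[target], trial_k)
        trial = dict(quant)
        trial[target] = q
        if baseline - accuracy_fn(trial) > max_drop:
            break  # performance dropped past the threshold; stop
        ks[target], quant[target], errs[target] = trial_k, q, e
    return ks, quant
```

In a real network, `accuracy_fn` would run the quantized model on a validation set; here any callable mapping quantized weights to a score works, and the loop's stopping rule mirrors the abstract's "reduced by a predetermined threshold" condition.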

Description

[0001] This application claims the benefit of priority to U.S. Provisional Patent Application No. 62/444,352, filed January 9, 2017, the disclosure of which is hereby incorporated by reference in its entirety. Technical field [0002] The subject matter disclosed herein relates generally to deep-learning networks and, more particularly, to an apparatus and method for reducing the weight storage bits of a deep-learning network. Background [0003] Deep learning is a widely used technique in the fields of artificial intelligence (AI) and computer vision. Various deep-learning architectures, such as convolutional neural networks (CNNs), deep belief networks (DBNs), and autoencoders, have been shown to be effective for tasks such as visual object recognition, automatic speech recognition, natural language processing, and music/audio signal processing, yielding state-of-the-art results. Major efforts in deep learning have focused on software implementations for various network ...


Application Information

IPC(8): G06N3/04, G06N3/08, G06K9/62
CPC: G06N3/08, G06N3/045, G06F18/23213, G06N3/063, G06N3/0495
Inventor: Zhengping Ji, John Wakefield Brothers
Owner SAMSUNG ELECTRONICS CO LTD