
Methods and apparatus for localized processing within multicore neural networks

A neural network and localization technology, applied in the field of neural network processing, addressing problems such as the many inefficiencies of existing implementations.

Pending Publication Date: 2022-01-13
FEMTOSENSE INC
Cites: 0 · Cited by: 5
  • Summary
  • Abstract
  • Description
  • Claims
  • Application Information

AI Technical Summary

Benefits of technology

This patent describes methods and apparatus for processing neural networks in multicore network processors. The technical effects include improved performance and efficiency when processing neural networks across multiple cores, as well as improved sparsity and localization of neural network processing. These improvements can yield faster and more accurate results in neural network processing.
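The summary above refers to distributing neural network processing across multiple cores. As a rough, hypothetical sketch (illustrative only, not the patented method; the function name `partition_rows` is an assumption), one common form of such distribution is row-wise partitioning of a weight matrix so that each core computes its slice of the output locally:

```python
import numpy as np

def partition_rows(W, n_cores):
    """Split weight matrix W row-wise so that each core can hold a
    local shard and compute its slice of y = W @ x independently."""
    return np.array_split(W, n_cores, axis=0)

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 4))   # toy weight layer
x = rng.standard_normal(4)        # input activation vector

shards = partition_rows(W, n_cores=4)      # one shard per core
partials = [shard @ x for shard in shards] # each "core" works on local data
y = np.concatenate(partials)               # gather the per-core slices

assert np.allclose(y, W @ x)  # matches the monolithic computation
```

Because each shard's rows are independent, no inter-core communication is needed until the final gather; this is one way localized processing can avoid shared-memory bottlenecks.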

Problems solved by technology

Unfortunately, such implementations suffer from many inefficiencies due to, e.g., hardware limitations (e.g., physical connectivity), compiler design, and/or instruction scheduling.

Method used




Embodiment Construction

[0032]In the following detailed description, reference is made to the accompanying drawings which form a part hereof wherein like numerals designate like parts throughout, and in which is shown, by way of illustration, embodiments that may be practiced. It is to be understood that other embodiments may be utilized, and structural or logical changes may be made without departing from the scope of the present disclosure. Therefore, the following detailed description is not to be taken in a limiting sense, and the scope of embodiments is defined by the appended claims and their equivalents.

[0033]Aspects of the disclosure are disclosed in the accompanying description. Alternate embodiments of the present disclosure and their equivalents may be devised without departing from the spirit or scope of the present disclosure. It should be noted that any discussion herein regarding “one embodiment”, “an embodiment”, “an exemplary embodiment”, and the like indicate that the embodiment described...



Abstract

Methods and apparatus for localized processing within multicore neural networks. Unlike existing solutions that rely on commodity software and hardware to perform "brute force" large-scale neural network processing, the various techniques described herein map and partition a neural network into the hardware limitations of a target platform. Specifically, the various implementations described herein synergistically leverage localization, sparsity, and distributed scheduling to enable neural network processing within embedded hardware applications. As described herein, hardware-aware mapping/partitioning enhances neural network performance by, e.g., avoiding pin-limited memory accesses, processing data in compressed formats and skipping unnecessary operations, and decoupling scheduling between cores.
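The abstract mentions processing data in compressed formats and skipping unnecessary operations. As an illustration of that general idea (a standard compressed-sparse-row product, not the patented technique), a sparse matrix-vector multiply can touch only the nonzero weights, so the work scales with the number of nonzeros rather than the matrix size:

```python
import numpy as np

def csr_matvec(data, indices, indptr, x):
    """Compute y = A @ x for a matrix A stored in CSR form.
    Zero entries are never stored or visited, so unnecessary
    multiply-accumulates are skipped entirely."""
    n_rows = len(indptr) - 1
    y = np.zeros(n_rows)
    for row in range(n_rows):
        # indptr[row]:indptr[row+1] spans this row's nonzeros
        for k in range(indptr[row], indptr[row + 1]):
            y[row] += data[k] * x[indices[k]]
    return y

# Dense equivalent:  A = [[0, 2, 0],
#                         [1, 0, 3]]
data = np.array([2.0, 1.0, 3.0])    # nonzero values
indices = np.array([1, 0, 2])       # column index of each value
indptr = np.array([0, 1, 3])        # row boundaries into data/indices
x = np.array([1.0, 2.0, 3.0])

print(csr_matvec(data, indices, indptr, x))  # [ 4. 10.]
```

Only 3 of the 6 entries are ever read, which is the kind of saving that sparsity-aware hardware aims to exploit at much larger scale.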

Description

PRIORITY

[0001] This application claims the benefit of priority to U.S. Provisional Patent Application Ser. No. 63/050,090, filed Jul. 9, 2020 and entitled "METHODS AND APPARATUS FOR LOCALIZED PROCESSING WITHIN MULTICORE NEURAL NETWORKS", which is incorporated herein by reference in its entirety.

RELATED APPLICATIONS

[0002] This application is related to U.S. patent application Ser. No. ______, filed and entitled "METHODS AND APPARATUS FOR MATRIX AND VECTOR STORAGE AND OPERATIONS", and U.S. patent application Ser. No. ______, filed and entitled "METHODS AND APPARATUS FOR THREAD-BASED SCHEDULING IN MULTICORE NEURAL NETWORKS", each of which is incorporated herein by reference in its entirety.

STATEMENT REGARDING FEDERALLY SPONSORED RESEARCH OR DEVELOPMENT

[0003] This invention was made with Government support under Agreement No. N00014-19-9-0003, awarded by ONR. The Government has certain rights in the invention.

COPYRIGHT

[0004] A portion of the disclosure of this patent document contains mater...

Claims


Application Information

Patent Type & Authority: Application (United States)
IPC(8): G06N3/063; G06N3/04
CPC: G06N3/063; G06N3/0481; G06N3/0454; G06F17/16; H03M7/3082; H03M7/6023; G06F9/3001; G06F9/30036; G06F9/3016; G06N3/08; G06F9/3851; G06F9/3802; G06F9/3818; G06F9/3838; G06F11/3024; G06F11/3433; G06F16/901; G06N3/10; H03M7/702; G06F15/7807; G06F9/3836; G06F9/3885; G06F9/48; G06N3/045; G06N3/048; G06F15/16; G06N3/098
Inventor: FOK, SAM BRIAN; NECKAR, ALEXANDER SMITH; REID, SCOTT HENRY
Owner FEMTOSENSE INC