
Automated design method, device and optimization method applied for neural network processor

A technology relating to neural networks and design methods, applied in biological neural network models, electrical digital data processing, CAD circuit design, etc., which can solve problems such as large circuit scale, high energy consumption, and long tape-out cycles.

Active Publication Date: 2017-08-04
INST OF COMPUTING TECH CHINESE ACAD OF SCI

AI Technical Summary

Problems solved by technology

[0003] At present, real-time task analysis using deep neural networks mostly relies on large-scale high-performance processors or general-purpose graphics processors. These devices have high cost and high power consumption; when applied to portable smart devices, they lead to large circuit scale, high energy consumption, and expensive products. Therefore, for energy-efficient real-time processing in embedded devices and in small, low-cost data centers, it is more effective to accelerate neural network model computation with dedicated neural network processors than with software. However, the topology and parameters of a neural network model change with the application scenario, and neural network models themselves develop and change very quickly, so it is very difficult to provide a single general-purpose, high-efficiency neural network processor that faces various application scenarios and covers various neural network models. This poses great challenges for high-level application developers who must design hardware acceleration solutions for different application requirements.
[0004] At present, existing neural network hardware acceleration technology includes two approaches: the Application Specific Integrated Circuit (ASIC) chip and the Field Programmable Gate Array (FPGA). Under the same process conditions, an ASIC chip runs faster and consumes less power, but its design process is complicated, its tape-out cycle is long, and its development cost is high, so it cannot adapt to the rapid update of neural network models. An FPGA offers flexible circuit configuration and a short development cycle, but its running speed is relatively low and its hardware overhead and power consumption are relatively high. Whichever of these hardware acceleration technologies is adopted, neural network model and algorithm developers need to understand the network topology and data flow mode while also mastering hardware development techniques, including processor architecture design, hardware code writing, simulation verification, placement and routing, etc. These techniques are difficult for high-level application developers who focus on researching neural network models and structure design and lack hardware design capabilities. Therefore, to enable high-level developers to efficiently implement neural network applications, it is very urgent to provide an automatic design method and tool for neural network processors covering various neural network models.



Examples


Embodiment Construction

[0039] In order to make the object, technical solution, design method and advantages of the present invention clearer, the present invention will be further described in detail through specific embodiments in conjunction with the accompanying drawings. It should be understood that the specific embodiments described here are only used to explain the present invention and are not intended to limit it.

[0040] The present invention aims to provide an automatic design method, device and optimization method suitable for neural network processors. The device includes a hardware generator and a compiler: the hardware generator automatically generates the hardware description language code of the neural network processor, after which the designer uses existing hardware circuit design methods to produce the processor hardware circuit from the hardware description language; the compiler generates control and data scheduling instructions according to the n...
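To illustrate the hardware-generator role described above, here is a minimal hypothetical sketch of a routine that emits HDL text from a numeric parameter. The module name, port names, and bit widths are all invented for the example and are not the patent's actual generator:

```python
# Hypothetical sketch: a "hardware generator" that emits Verilog text
# for a parameterized array of signed multipliers. All names and
# widths below are illustrative assumptions, not from the patent.

def emit_mac_array(n_units: int, data_width: int = 16) -> str:
    """Emit Verilog for n_units parallel signed multiply units."""
    in_ports = ",\n  ".join(
        f"input signed [{data_width - 1}:0] a{i}, b{i}" for i in range(n_units)
    )
    out_ports = ",\n  ".join(
        f"output signed [{2 * data_width - 1}:0] p{i}" for i in range(n_units)
    )
    body = "\n  ".join(f"assign p{i} = a{i} * b{i};" for i in range(n_units))
    return f"module mac_array (\n  {in_ports},\n  {out_ports}\n);\n  {body}\nendmodule\n"

print(emit_mac_array(2))
```

A real generator of this kind would additionally size buffers and control logic from the network topology; the point of the sketch is only that the HDL is produced programmatically rather than hand-written.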



Abstract

The invention discloses an automated design method, device and optimization method applied for a neural network processor. The method comprises the following steps: neural network model topological structure configuration files and hardware resource constraint files are obtained, wherein the hardware resource constraint files comprise target circuit area consumption, target circuit power consumption and target circuit working frequency; a neural network processor hardware architecture is generated according to the neural network model topological structure configuration files and the hardware resource constraint files, and hardware architecture description files are generated; according to the neural network model topological structure, the hardware resource constraint files and the hardware architecture description files, the modes of data scheduling, storage and calculation are optimized, and corresponding control description files are generated; according to the hardware architecture description files and the control description files, cell libraries meeting the design requirements are found in constructed reusable neural network cell libraries, corresponding control logic and a corresponding hardware circuit description language are generated, and the hardware circuit description language is transformed into a hardware circuit.
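The four-step flow of the abstract (topology configuration plus resource constraints, then an architecture description, then a control description, then HDL) can be sketched as a hypothetical driver. Every data structure, name, and sizing heuristic below is an illustrative assumption, not the patent's actual implementation:

```python
# Hypothetical sketch of the abstract's four-step automated design flow.
# All structures and heuristics here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class HardwareConstraints:      # stand-in for the hardware resource constraint file
    max_area_mm2: float
    max_power_mw: float
    clock_mhz: float

@dataclass
class LayerSpec:                # stand-in for one entry of the topology config file
    kind: str                   # e.g. "conv", "pool", "fc"

def generate_architecture(layers, c):
    # Step 2: size the datapath to fit the area budget
    # (illustrative heuristic: 0.05 mm^2 per MAC unit).
    mac_units = max(1, int(c.max_area_mm2 * 20))
    return {"mac_units": mac_units, "layers": [l.kind for l in layers]}

def generate_control(arch, layers):
    # Step 3: one scheduling step per layer, tiled over the MAC array.
    return [{"layer": l.kind, "tile": arch["mac_units"]} for l in layers]

def emit_hdl(arch, control):
    # Step 4: map onto a (here, trivial) cell library and emit HDL text.
    lines = [f"// mac_array with {arch['mac_units']} units"]
    lines += [f"// schedule {s['layer']} tile={s['tile']}" for s in control]
    return "\n".join(lines)

topology = [LayerSpec("conv"), LayerSpec("pool"), LayerSpec("fc")]
constraints = HardwareConstraints(max_area_mm2=1.0, max_power_mw=500.0, clock_mhz=200.0)
arch = generate_architecture(topology, constraints)
ctrl = generate_control(arch, topology)
print(emit_hdl(arch, ctrl))
```

The sketch deliberately collapses the cell-library search and optimization steps into trivial functions; its purpose is only to show how the described files chain together into a pipeline.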

Description

Technical field

[0001] The invention relates to the technical field of neural network processor architecture, in particular to an automatic design method, device and optimization method applicable to neural network processors.

Background technique

[0002] With the rapid development of related technologies in the field of artificial intelligence, deep learning, as an interdisciplinary product of computer science and life science, performs excellently in solving high-level abstract cognitive problems, and has therefore become a research hotspot in academia and industry. In order to improve the computing performance of neural networks and adapt to more complex application problems, the scale of neural networks is constantly expanding, and the amount of computation, the data volume and the computing energy consumption are also increasing. Searching for neural network computing methods and devices with high performance and low energy consumption has become a hot spot for researchers ...

Claims


Application Information

IPC (8): G06F17/50, G06N3/02
CPC: G06F30/30, G06N3/02
Inventors: 韩银和, 许浩博, 王颖
Owner: INST OF COMPUTING TECH CHINESE ACAD OF SCI