
High-parallelism computing system and instruction scheduling method thereof

A computing system and an instruction-scheduling technique, applied to a high-parallelism computing system, its instruction scheduling method, and the corresponding compilation field. It addresses the problem that neural network models keep growing in computing scale and complexity, so that traditional CPU platforms can no longer meet practical requirements.

Active Publication Date: 2020-01-07
XILINX INC
9 Cites · 8 Cited by

AI Technical Summary

Problems solved by technology

In recent years, neural network models have grown steadily in computing scale and complexity, and traditional CPU platforms can no longer meet their practical requirements.




Detailed Description of the Embodiments

[0036] Hereinafter, preferred embodiments of the present disclosure are described in more detail with reference to the accompanying drawings. Although the drawings show preferred embodiments, it should be understood that the present disclosure can be implemented in various forms and should not be limited by the embodiments set forth herein. Rather, these embodiments are provided so that the disclosure is thorough and complete, and fully conveys its scope to those skilled in the art.

[0037] Artificial intelligence has developed rapidly in recent years, has achieved good results in image classification, detection, and video and voice processing, and still has great development prospects. Neural networks are the core of artificial-intelligence applications, and the deep learning neural network is one of the most common neural network models. The workload characteristics of n...



Abstract

The invention provides a high-parallelism computing system and an instruction scheduling method thereof. The computing system comprises: an instruction reading and distributing module, which reads a plurality of types of instructions in a specific sequence and distributes the obtained instructions to the corresponding function modules according to their type; an internal cache, which caches the data and instructions required for executing computation; and a plurality of functional modules, which sequentially execute the instructions of the category distributed by the instruction reading and distributing module and read the required data from the internal cache. The specific sequence is obtained by topologically sorting the instructions according to a directed acyclic graph formed from their categories and dependency relationships. Because instruction fetch follows a topological order of this DAG, deadlock caused by inter-instruction dependencies is avoided with relatively simple operations. Preferably, the sorting can be performed at the instruction compilation stage, which further reduces the resource consumption of the computing system.
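The abstract's core idea, fetching instructions in a topological order of the dependency DAG so that no functional module ever waits on an instruction issued later, can be sketched with Kahn's algorithm. This is a minimal illustration, not the patent's implementation; the instruction names and categories below are hypothetical:

```python
from collections import deque

def schedule(instructions, deps):
    """Return a deadlock-free fetch order via Kahn's topological sort.

    instructions: list of (instr_id, category) pairs
    deps: dict mapping instr_id -> set of prerequisite instr_ids
    """
    indegree = {i: 0 for i, _ in instructions}
    dependents = {i: [] for i, _ in instructions}
    for node, prereqs in deps.items():
        for p in prereqs:
            indegree[node] += 1
            dependents[p].append(node)
    ready = deque(i for i, d in indegree.items() if d == 0)
    order = []
    while ready:
        n = ready.popleft()
        order.append(n)
        for m in dependents[n]:
            indegree[m] -= 1
            if indegree[m] == 0:
                ready.append(m)
    if len(order) != len(indegree):
        # A cycle means no topological order exists; a naive in-order
        # fetch over such a graph is exactly where deadlock arises.
        raise ValueError("dependency graph contains a cycle")
    return order

# Hypothetical LOAD -> CALC -> SAVE chain, one instruction per functional module.
print(schedule([("load0", "LOAD"), ("calc0", "CALC"), ("save0", "SAVE")],
               {"calc0": {"load0"}, "save0": {"calc0"}}))
# -> ['load0', 'calc0', 'save0']
```

Running the sort once at compile time, as the abstract suggests, means the hardware only needs to fetch instructions in the precomputed order rather than resolve dependencies at run time.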

Description

Technical field

[0001] The present invention relates to the field of high-parallelism computing, and more specifically to a high-parallelism computing system, its instruction scheduling method, and a corresponding compilation method.

Background art

[0002] Neural networks have become a research hotspot in the field of image recognition in recent years. A trained neural network model can be used in many fields, such as image classification, object recognition, and saliency detection. In recent years, neural network models have shown a trend of increasing computing scale and complexity, and traditional CPU platforms can no longer meet their practical requirements. Therefore, using FPGA, GPU, ASIC and other heterogeneous computing platforms for neural network accelerator design has become a new research focus. Among them, compared with GPU platforms, FPGAs and ASICs can achieve more flexible hardware architectures and higher computing energy efficiency r...

Claims


Application Information

IPC(8): G06F9/38, G06N3/04, G06N3/08, G06N3/063
CPC: G06N3/08, G06N3/063, G06F9/3814, G06F9/3889, G06N3/045
Inventors: 于谦, 隋凌志, 方绍峡, 王俊斌, 单羿
Owner: XILINX INC