
Highly Parallel Computing System and Its Instruction Scheduling Method

A highly parallel computing system, its instruction scheduling method, and a corresponding compiling method are proposed. They address the growing computing scale and complexity of neural network models, which traditional CPU platforms can no longer satisfy in practice, achieving the effect of reducing resource consumption and improving parallelism.

Active Publication Date: 2022-04-08
XILINX INC


Problems solved by technology

In recent years, neural network models have shown a trend of increasing computing scale and complexity, and traditional CPU platforms can no longer meet their practical requirements.



Detailed Description of the Embodiments

[0036] Preferred embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. Although preferred embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.

[0037] Artificial intelligence has developed rapidly in recent years, has achieved good results in fields such as image classification, detection, and video and speech processing, and still has great prospects for further development. Neural networks are at the core of artificial intelligence applications, and deep learning neural networks are among the most common neural network models. The workload of neural networks is characterized by...


Abstract

A highly parallel computing system and its instruction scheduling method are proposed. The computing system includes: an instruction reading and distribution module, which reads instructions of multiple categories in a specific order and distributes them to the corresponding functional modules by category; an internal cache; and a plurality of functional modules, each of which sequentially executes the instructions of its category distributed by the instruction reading and distribution module and reads the required data from the internal cache. The specific order is obtained by topologically sorting the instructions according to a directed acyclic graph built from their categories and dependencies. Because the issue order comes from a topological sort of this dependency graph, deadlocks caused by inter-instruction dependencies can be avoided with relatively simple operations. Preferably, the sorting is performed at instruction compilation time, further reducing the resource consumption of the computing system itself.
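The scheduling idea in the abstract can be sketched with a standard topological sort (Kahn's algorithm) over an instruction dependency DAG. This is a minimal illustration, not the patent's actual implementation: the instruction categories (`LOAD`, `COMPUTE`, `STORE`), the `schedule_instructions` function, and the example graph are all hypothetical.

```python
from collections import deque

def schedule_instructions(instructions, deps):
    """Topologically sort instructions (Kahn's algorithm).

    instructions: list of (id, category) tuples.
    deps: list of (producer_id, consumer_id) edges; the consumer
    must be issued after the producer.
    Returns an issue order in which no instruction precedes one it
    depends on, so per-category in-order dispatchers cannot
    deadlock on inter-category dependencies.
    """
    indegree = {i: 0 for i, _ in instructions}
    successors = {i: [] for i, _ in instructions}
    for producer, consumer in deps:
        successors[producer].append(consumer)
        indegree[consumer] += 1

    # Instructions with no unmet dependencies are ready to issue.
    ready = deque(i for i, d in indegree.items() if d == 0)
    order = []
    while ready:
        node = ready.popleft()
        order.append(node)
        for nxt in successors[node]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                ready.append(nxt)

    # A leftover node means the "graph" had a cycle, i.e. no
    # deadlock-free linear order exists.
    if len(order) != len(instructions):
        raise ValueError("dependency cycle: no valid schedule")
    return order

# Hypothetical example: a LOAD -> COMPUTE -> STORE chain plus an
# independent LOAD (id 3) that the sort may interleave freely.
instrs = [(0, "LOAD"), (1, "COMPUTE"), (2, "STORE"), (3, "LOAD")]
edges = [(0, 1), (1, 2)]
print(schedule_instructions(instrs, edges))  # → [0, 3, 1, 2]
```

Any order the sort emits respects every dependency edge, which is the property the abstract relies on: if each functional module consumes its category's instructions strictly in this order, no module can block waiting on a result that is scheduled behind it.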

Description

Technical Field

[0001] The present invention relates to the field of highly parallel computing, and more specifically to a highly parallel computing system, an instruction scheduling method thereof, and a corresponding compiling method.

Background

[0002] Neural networks have become a research hotspot in the field of image recognition in recent years. A trained neural network model can be used in many fields such as image classification, object recognition, and saliency detection. In recent years, neural network models have shown a trend of increasing computing scale and complexity, and traditional CPU platforms can no longer meet their practical requirements. Therefore, designing neural network accelerators on heterogeneous computing platforms such as FPGAs, GPUs, and ASICs has become a new research hotspot. Among them, compared with the GPU platform, FPGAs and ASICs can realize more flexible hardware architectures and higher co...


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06F 9/38; G06N 3/04; G06N 3/08; G06N 3/063
CPC: G06N 3/08; G06N 3/063; G06F 9/3814; G06F 9/3889; G06N 3/045
Inventors: 于谦 (Yu Qian), 隋凌志 (Sui Lingzhi), 方绍峡 (Fang Shaoxia), 王俊斌 (Wang Junbin), 单羿 (Shan Yi)
Owner: XILINX INC