Deep learning network application distributed self-assembly instruction processor core, processor, circuit and processing method

A deep learning network application distribution technology, applied in the field of distributed self-assembly instruction processor cores. It addresses prior-art problems such as a lack of system adaptability and flexibility, for which no effective solution exists, and achieves architecture adaptability, increases computational throughput, and reduces storage capacity and logic resource usage.

Active Publication Date: 2022-03-04
SHANDONG NORMAL UNIV
  • Summary
  • Abstract
  • Description
  • Claims
  • Application Information

AI Technical Summary

Problems solved by technology

[0007] To sum up, the inventors found that the prior art suffers from problems such as complex circuit implementation, low anti-interference capability, low reusability, and high hardware cost, and in particular lacks sufficient flexibility and system adaptability; no effective solution to these problems exists.



Examples


Embodiment 1

[0048] According to an aspect of one or more embodiments of the present disclosure, a deep learning network application distributed self-assembly instruction processor core is provided.

[0049] As shown in Figure 1, a deep learning network application distributed self-assembly instruction processor core includes:

[0050] four register interface modules, with a preparation module, a convolution operation module and a pooling operation module arranged in sequence between two of the register interface modules;

[0051] The register interface module is configured to connect registers;

[0052] The preparation module is configured to prepare data windows and their corresponding coefficients;

[0053] The convolution operation module is configured to perform the convolution operation between a data window and its corresponding filter kernel, and its convolution kernel parameters are configurable;

[0054] The pooling module is configured to perform pooling operations.

[...
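
The following is a minimal software sketch of the dataflow that Embodiment 1 describes in hardware: the preparation module supplies data windows and their coefficients, the convolution operation module multiply-accumulates each window with a configurable kernel, and the pooling module reduces the result. The function names and the NumPy implementation are illustrative assumptions, not the patented circuit.

```python
import numpy as np

def prepare_windows(feature_map, kernel_size, stride=1):
    """Preparation module: slide over the input and yield each data window
    together with its top-left position."""
    h, w = feature_map.shape
    k = kernel_size
    for r in range(0, h - k + 1, stride):
        for c in range(0, w - k + 1, stride):
            yield (r, c), feature_map[r:r + k, c:c + k]

def convolve(window, kernel):
    """Convolution operation module: multiply-accumulate of one data window
    with the (configurable) filter kernel."""
    return float(np.sum(window * kernel))

def max_pool(feature_map, pool=2):
    """Pooling operation module: non-overlapping max pooling."""
    h, w = feature_map.shape
    h, w = h - h % pool, w - w % pool
    return feature_map[:h, :w].reshape(h // pool, pool, w // pool, pool).max(axis=(1, 3))

# Example: 6x6 input, 3x3 averaging kernel (the "coefficients"), 2x2 pooling.
x = np.arange(36, dtype=float).reshape(6, 6)
kernel = np.full((3, 3), 1.0 / 9.0)      # configurable kernel parameters
conv_out = np.zeros((4, 4))
for (r, c), window in prepare_windows(x, kernel_size=3):
    conv_out[r, c] = convolve(window, kernel)
print(max_pool(conv_out))                # 2x2 pooled feature map
```

Chaining these three steps mirrors the per-core pipeline bounded by the register interface modules described above.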

Embodiment 2

[0059] According to an aspect of one or more embodiments of the present disclosure, a deep learning network application distributed self-assembly instruction processor is provided.

[0060] As shown in Figures 2-3, a deep learning network application distributed self-assembly instruction processor includes several processor cores and an instruction statistics distribution module;

[0061] The instruction statistics distribution module is configured to count the instructions of the deep convolutional network and distribute the instruction stream;

[0062] The instruction statistics distribution module is connected to each processor core through an instruction stack module; the instruction stack module is configured to receive and store the instruction stream distributed by the instruction statistics distribution module, to perform multi-instruction-stream accelerated operation according to the stored instruction stream, and to control the processor cores ...
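
As a rough illustration of this control path, the sketch below counts the instructions of a small network and distributes the stream to per-core instruction stack modules. The class names and the round-robin distribution policy are assumptions made for illustration only; the patent does not specify a software API.

```python
from collections import Counter, deque

class InstructionStack:
    """Instruction stack module: buffers the instruction stream assigned to
    one processor core and drives that core with it."""
    def __init__(self, core_id):
        self.core_id = core_id
        self.stream = deque()

    def push(self, instruction):
        self.stream.append(instruction)

    def run(self):
        while self.stream:
            print(f"core {self.core_id}: execute {self.stream.popleft()}")

class InstructionStatisticsDistributor:
    """Instruction statistics distribution module: counts the network's
    instructions, then distributes the stream across the instruction stacks."""
    def __init__(self, num_cores):
        self.stacks = [InstructionStack(i) for i in range(num_cores)]

    def distribute(self, instructions):
        print("instruction statistics:", dict(Counter(instructions)))
        for i, instruction in enumerate(instructions):
            self.stacks[i % len(self.stacks)].push(instruction)   # round-robin
        return self.stacks

# Example instruction stream for a small deep convolutional network.
program = ["prepare_window", "conv3x3", "pool2x2",
           "prepare_window", "conv3x3", "pool2x2"]
for stack in InstructionStatisticsDistributor(num_cores=2).distribute(program):
    stack.run()
```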

Embodiment 3

[0067] On the basis of the deep learning network application distributed self-assembly instruction processor disclosed in Embodiment 2, the instruction flow of "prepare data windows and corresponding coefficients + single convolution + pooling" is executed. As shown in Figure 4, the register interface module, preparation module, convolution operation module and pooling operation module in each processor core form a deep neural convolutional network architecture (shown in gray in Figure 4).
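
A hedged sketch of this fused instruction is given below: each core executes "prepare + single convolution + pooling" as one step, and chaining two cores back to back yields a two-layer network architecture of the kind Embodiment 3 describes. The function conv_pool_core and its NumPy body are hypothetical stand-ins for the hardware module chain.

```python
import numpy as np

def conv_pool_core(x, kernel, pool=2):
    """One core's fused instruction: prepare windows + convolve + max-pool."""
    k = kernel.shape[0]
    conv = np.zeros((x.shape[0] - k + 1, x.shape[1] - k + 1))
    for r in range(conv.shape[0]):                 # preparation + convolution
        for c in range(conv.shape[1]):
            conv[r, c] = np.sum(x[r:r + k, c:c + k] * kernel)
    h = conv.shape[0] - conv.shape[0] % pool       # pooling
    w = conv.shape[1] - conv.shape[1] % pool
    return conv[:h, :w].reshape(h // pool, pool, w // pool, pool).max(axis=(1, 3))

# Two cores in sequence act as a two-layer convolutional network.
x = np.random.rand(16, 16)
layer1 = conv_pool_core(x, np.random.rand(3, 3))        # 16x16 -> 14x14 -> 7x7
layer2 = conv_pool_core(layer1, np.random.rand(3, 3))   # 7x7 -> 5x5 -> 2x2
print(layer1.shape, layer2.shape)                       # (7, 7) (2, 2)
```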



Abstract

The disclosure provides a deep learning network application distributed self-assembly instruction processor core, a processor, a circuit and a processing method. The processor core includes four register interface modules, with a preparation module, a convolution operation module and a pooling operation module arranged in sequence between two of the register interface modules. The processor includes an instruction statistics distribution module configured to count deep convolutional network instructions and distribute instruction streams; the instruction statistics distribution module is connected to each processor core through an instruction stack module, and the instruction stack module is configured to receive and store the instruction stream distributed by the instruction statistics distribution module, to perform multi-instruction-stream accelerated operation according to the stored instruction stream, and to control the processor cores to form different deep neural convolutional network architectures for calculation and processing.

Description

Technical Field

[0001] The disclosure belongs to the technical field of hardware circuit design, and relates to a deep learning network application distributed self-assembly instruction processor core, a processor, a circuit and a processing method.

Background Technique

[0002] The statements in this section merely provide background information related to the present disclosure and do not necessarily constitute prior art.

[0003] With the development of artificial intelligence neural convolutional network technology, deep neural networks account for most of the computing workload, which demands fast and effective computation while consuming few hardware circuit resources. The inventors found that existing deep neural network processing systems have certain problems, mainly reflected in high circuit resource overhead, a lack of sufficient flexibility, and a lack of sufficient system adaptability.

[0004] The patent application number is "CN201610342944.6". The applicant dis...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06N3/063; G06N3/04; G06N3/08
CPC: G06N3/063; G06N3/08; G06N3/045
Inventors: 孙建辉, 蔡阳健, 李登旺
Owner: SHANDONG NORMAL UNIV