65 results about "Interconnection architecture" patented technology

Hardware interconnection architecture of reconfigurable convolutional neural network

The invention belongs to the technical field of hardware design for image-processing algorithms and discloses a hardware interconnection architecture for a reconfigurable convolutional neural network. The interconnection architecture comprises a data-and-parameter off-chip caching module, a basic calculation unit array module, and an arithmetic logic unit (ALU) calculation module. The off-chip caching module caches the pixel data of input images to be processed and the parameters supplied during convolutional neural network calculation; the basic calculation unit array module performs the core computation of the convolutional neural network; and the ALU calculation module processes the results of the array module, handling down-sampling layers, activation functions, and partial-sum accumulation. The basic calculation units are interconnected as a two-dimensional array: in the row direction, input data is shared and parallel calculation is performed with different parameter data; in the column direction, the calculation result is passed row by row to serve as the input of the next row. Through this structural interconnection, the architecture reduces bandwidth demand while enhancing data reuse.
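The row/column dataflow described above can be illustrated with a toy sketch. This is not the patent's implementation; the function name, shapes, and weight values are all hypothetical, chosen only to show how a shared input per row combines with per-column weights while partial sums flow down the columns.

```python
def pe_array_forward(inputs, weights):
    """Toy model of the 2D PE-array dataflow: each row of PEs shares one
    input value (row direction), each column holds its own weights, and
    partial sums are passed down the columns row by row (column direction)."""
    cols = len(weights[0])
    psum = [0.0] * cols                      # partial sums entering row 0
    for x, w_row in zip(inputs, weights):    # column direction: row by row
        # row direction: the same input x is shared by every column
        psum = [p + x * w for p, w in zip(psum, w_row)]
    return psum

# 3 rows of PEs, 2 output columns; values are illustrative only.
out = pe_array_forward([1.0, 2.0, 3.0], [[1.0, 0.5], [1.0, 0.5], [1.0, 0.5]])
# out = [6.0, 3.0]: each column ends up with a dot product of the shared inputs.
```

Because each input value is broadcast along a row instead of being re-fetched per column, the off-chip bandwidth demand drops in proportion to the number of columns, which is the reuse benefit the abstract claims.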
Owner:FUDAN UNIV

RapidIO network recursive enumeration method

Status: Inactive · CN103746910A · Solves the problem that the optimal path cannot be selected · Classifications: Data switching networks, Distributed computing, External connection
The invention pertains to the field of data communication and relates to a RapidIO network recursive enumeration method. It provides a recursive enumeration networking method based on depth-first path weighting that calculates the shortest communication path between the master and each slave in a RapidIO network, yielding a dynamic optimal-path enumeration method that is simple in structure, adaptable, reliable, and able to accommodate changes in externally connected devices. An IDT Tsi578 is used as the RapidIO switch to realize an optimal RapidIO networking scheme based on the Tsi578 device. The technical scheme is as follows: in a system using the RapidIO bus as the interconnection architecture, the RapidIO master node is taken as the reference and the route hop count as the weight to calculate the distance between each slave node in the RapidIO network and the master node; for each newly discovered slave node, the shortest path to the master node is selected dynamically, and the routing information table mapping each slave node ID to the corresponding Tsi578 switch is set according to that shortest path.
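With hop count as the only weight, shortest-path selection reduces to breadth-first search from the master node, even though the patent describes a depth-first recursive traversal with dynamic path re-selection. As a hedged sketch of the hop-count computation only (the topology, node names, and dictionaries below are hypothetical, not from the patent):

```python
from collections import deque

def enumerate_shortest_paths(links, master):
    """BFS from the master node; with unit hop weights, the first time a
    slave node is reached gives its shortest route, and the predecessor
    map is what a routing table for each switch would be built from."""
    dist = {master: 0}
    parent = {master: None}                   # predecessor along shortest path
    queue = deque([master])
    while queue:
        node = queue.popleft()
        for neighbor in links.get(node, []):
            if neighbor not in dist:          # newly discovered node
                dist[neighbor] = dist[node] + 1
                parent[neighbor] = node
                queue.append(neighbor)
    return dist, parent

# Hypothetical topology: master "M" reaches slave "S" directly via switch
# "SW1" (2 hops) or via "SW1" -> "SW2" (3 hops); BFS keeps the 2-hop route.
topology = {"M": ["SW1"], "SW1": ["M", "SW2", "S"],
            "SW2": ["SW1", "S"], "S": ["SW1", "SW2"]}
dist, parent = enumerate_shortest_paths(topology, "M")
# dist["S"] == 2 and parent["S"] == "SW1"
```

The dynamic part described in the abstract corresponds to rerunning this selection whenever a newly found node (or a change in external devices) alters the link set.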
Owner:SUZHOU CHANGFENG AVIATION ELECTRONICS

Network enumeration method of Rapid IO bus interconnection system

The invention proposes a network enumeration method for a RapidIO bus interconnection system, aiming to provide an enumeration method that connects only to the host node, adapts to changes in network size, and fixes the mapping between physical addresses and network addresses. The technical scheme is as follows: in a system using the RapidIO bus as the interconnection architecture, a connected graph G0(V, E) is constructed and stored in the host node as a network template, where V is the vertex set describing the network nodes and each vertex v in V carries a network address and a physical address assigned in advance to its node. The allocation strategy of the template's predefined addresses is added to a network enumeration algorithm consisting of a network enumeration main function and a node-setting subfunction invoked by it. For each newly discovered network node, the network address and the port number of the enumerated port are used as keys to find the matching node in G0, and that matching node supplies the address assigned to the new node.
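The template lookup at the heart of this scheme can be sketched as a search keyed on (enumerating node's network address, enumerated port number). The record fields and values below are illustrative assumptions, not the patent's actual data layout.

```python
def match_template_node(template, enum_net_addr, port):
    """Find the template vertex reached from the node with network address
    enum_net_addr through the given port; return its pre-assigned
    (network address, physical address), or None if the topology differs
    from the template."""
    for v in template:
        if v["parent_addr"] == enum_net_addr and v["port"] == port:
            return v["net_addr"], v["phys_addr"]
    return None

# Hypothetical template G0: two nodes hang off host (network address 0).
template = [
    {"parent_addr": 0, "port": 1, "net_addr": 10, "phys_addr": 0x1000},
    {"parent_addr": 0, "port": 2, "net_addr": 11, "phys_addr": 0x2000},
]
addrs = match_template_node(template, 0, 2)   # node found behind host port 2
# addrs == (11, 0x2000)
```

Because addresses come from the stored template rather than from discovery order, the physical-to-network address mapping stays fixed even when parts of the network are absent or added.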
Owner:10TH RES INST OF CETC

Network architecture generation method and device, readable storage medium and computer equipment

The invention relates to a method and device for generating network architecture planning information, a computer-readable storage medium, and computer equipment. The method comprises the steps of: obtaining interconnection information between a network architecture object to be constructed and an interconnection architecture object; searching for the hardware module model matching the interconnection port attributes, and determining the number of hardware modules from the number of interconnection peers and the number of interconnection ports; obtaining hardware material information for constructing the network architecture object from the hardware module model and the module count; generating connection-relation information for the interconnection ports between the network architecture object and the interconnection architecture object from the numbers of interconnection peers and ports; and generating the network architecture planning information from the hardware material information and the connection-relation information. The scheme dynamically generates network architecture planning information that meets the construction requirements of the business servers while effectively avoiding waste of equipment resources.
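The module-count step is, at its core, a ceiling division: enough modules so that every interconnection peer gets a port. A minimal sketch, assuming only that count and the function name are illustrative (the patent does not give this formula explicitly):

```python
import math

def modules_needed(peer_count, ports_per_module):
    """Smallest number of hardware modules whose combined ports cover
    every interconnection peer (illustrative, not the patent's wording)."""
    return math.ceil(peer_count / ports_per_module)

# e.g. 48 peer devices on 16-port modules -> 3 modules; one more peer -> 4.
print(modules_needed(48, 16), modules_needed(49, 16))
```

Rounding up rather than over-provisioning by a fixed margin is what lets the generated plan avoid the equipment-resource waste the abstract mentions.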
Owner:TENCENT TECH (SHENZHEN) CO LTD

Asynchronous communication interconnection architecture and brain-like chip with same

The invention relates to the technical field of artificial neural networks, in particular to an asynchronous communication interconnection architecture and a brain-like chip using it. The asynchronous communication interconnection architecture comprises an intra-chip asynchronous communication interconnection architecture, an inter-chip asynchronous communication interconnection architecture, neuron calculation units, and on-chip routing units. Each neuron calculation unit and each on-chip routing unit has an independent clock management module in its own clock domain; within the same brain-like chip, neuron calculation units connect to on-chip routing units, and adjacent on-chip routing units connect through the intra-chip asynchronous communication interconnection architecture. Adjacent brain-like chips connect to each other through the inter-chip asynchronous communication interconnection architecture. This allows a large number of neuron calculation units to be integrated efficiently in a brain-like chip while the chips themselves are cascaded and expanded efficiently, yielding large-scale neuron computing resources.
Owner:ZHEJIANG LAB +1

SSD (Solid State Disk) with multi-channel full-interconnection architecture and control method of SSD

Status: Inactive · CN112114744A · Reduces the problem of uneven resource utilization; increases benefit · Classifications: Input/output to record carriers, Computer hardware, Computer architecture
The invention discloses an SSD (solid-state disk) with a multi-channel full-interconnection architecture and a control method for it. Channels and chips are connected by a fully interconnected network; the SSD master-control algorithm assigns specific IDs to the different channels and chips in a routing chip, controls the different ID connections by sending commands through control pins, and maintains a channel-and-chip state list. When a read/write request arrives, the master-control algorithm traverses the channel-and-chip state list, finds the channels and chips meeting the requirements, and uses a FIFO strategy to select the first qualifying channel and chip, thereby realizing SSD control over the multi-channel full-interconnection architecture. Any channel can interconnect with any chip, and once interconnected they do not conflict with the links of other existing chips and channels. The invention effectively reduces the uneven channel utilization caused by conditions such as workload and improves the read/write performance of the SSD.
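The traversal-then-FIFO selection can be sketched as follows. The state-list fields and entries are hypothetical; the sketch only shows the "scan in arrival order, return the first idle pair" behavior the abstract describes.

```python
from collections import deque

def pick_channel_chip(state_list):
    """Traverse the channel/chip state list in FIFO order and return the
    first idle (channel, chip) pair, or None if nothing is free."""
    for entry in state_list:            # deque preserves arrival order
        if entry["state"] == "idle":
            return entry["channel"], entry["chip"]
    return None

# Hypothetical state list: the first idle pair is channel 1 / chip 0.
states = deque([
    {"channel": 0, "chip": 2, "state": "busy"},
    {"channel": 1, "chip": 0, "state": "idle"},
    {"channel": 2, "chip": 1, "state": "idle"},
])
chosen = pick_channel_chip(states)      # (1, 0)
```

Picking the oldest qualifying entry rather than a fixed channel is what spreads requests across channels and counters the utilization imbalance.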
Owner:XI AN JIAOTONG UNIV

Buffer-free optical interconnect architecture and method for data center

The invention discloses a buffer-free optical interconnection architecture and method for a data center. The architecture comprises ingress line cards, a three-stage switching module, and egress line cards, where the switching module consists of first-stage AWGRs, third-stage AWGRs, and second-stage TWCs. Ingress line cards attach to the first-stage AWGR input ports, egress line cards attach to the third-stage AWGR output ports, and the first-stage AWGR output ports connect to the third-stage AWGR input ports through the second-stage TWCs. Server signals are wavelength-modulated and wavelength-division multiplexed by the ingress line card and forwarded to the switching module; through TWC wavelength conversion and AWGR cyclic routing, optical signals of specific wavelengths are forwarded to the egress line card, which processes them and sends them on to the servers. The architecture achieves high-capacity, highly reliable, low-power all-optical interconnection, effectively flattens the data center, overcomes the low per-port bandwidth and limited port count of traditional space switches, and offers high reliability, low complexity, and low delay.
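AWGR cyclic routing relies on a fixed wavelength-to-port mapping: on an N x N AWGR, the output port is determined by the input port and the wavelength index modulo N. This is the standard AWGR property, not a quote from the patent; the port and wavelength numbers below are illustrative.

```python
def awgr_output_port(input_port, wavelength_index, num_ports):
    """Cyclic wavelength routing of an N x N AWGR: a signal entering
    input_port on wavelength wavelength_index exits at this output port.
    Standard AWGR behavior, assumed here rather than taken from the patent."""
    return (input_port + wavelength_index) % num_ports

# On a 4x4 AWGR, input port 2 carrying wavelength index 3 exits at port 1,
# so the second-stage TWC steers traffic simply by converting the wavelength.
port = awgr_output_port(2, 3, 4)    # 1
```

Because routing is fixed by wavelength, the TWC stage is the only active element needed to pick a path, which is why the fabric can be buffer-free.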
Owner:ZHEJIANG UNIV +1