334 results about "Reconfigurable computing" patented technology

Reconfigurable computing is a computer architecture combining some of the flexibility of software with the high performance of hardware by processing with very flexible high-speed computing fabrics like field-programmable gate arrays (FPGAs). The principal difference when compared to using ordinary microprocessors is the ability to make substantial changes to the datapath itself in addition to the control flow. On the other hand, the main difference from custom hardware, i.e., application-specific integrated circuits (ASICs), is the possibility to adapt the hardware during runtime by "loading" a new circuit onto the reconfigurable fabric.
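The ASIC/FPGA contrast above can be modeled in a few lines of software. In this hedged sketch, "loading a bitstream" is stood in for by installing a new datapath function on a fabric object; the `Fabric` class and the bitstream names are purely illustrative, not any real FPGA API.

```python
class Fabric:
    """Models a reconfigurable fabric: the loaded datapath can be swapped at runtime."""
    def __init__(self):
        self._datapath = None

    def load(self, bitstream):
        # "Loading a bitstream" is modeled as installing a new datapath function.
        self._datapath = bitstream

    def run(self, *inputs):
        if self._datapath is None:
            raise RuntimeError("no configuration loaded")
        return self._datapath(*inputs)

# Two different "circuits" for the same fabric.
adder_bitstream = lambda a, b: a + b
multiplier_bitstream = lambda a, b: a * b

fabric = Fabric()
fabric.load(adder_bitstream)
print(fabric.run(3, 4))            # 7
fabric.load(multiplier_bitstream)  # reconfigure at runtime
print(fabric.run(3, 4))            # 12
```

A fixed-function ASIC corresponds to a fabric whose `load` was performed once at fabrication and can never be called again.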

Dynamically reconfigurable multi-stage parallel single instruction multiple data array processing system

The invention discloses a dynamically reconfigurable multi-stage parallel single-instruction-multiple-data array processing system comprising a pixel-level parallel processing element (PE) array and a row-parallel row processor (RP) array. The PE array mainly performs the linear operations in low-level and intermediate-level image processing that suit parallel execution across all pixels; the RP array handles operations in those same stages that are better performed row-parallel, as well as complex nonlinear operations. In particular, the PE array can also be dynamically reconfigured into a two-dimensional self-organizing map (SOM) neural network at extremely low performance and area overhead, and this network, coordinating with the RPs, can realize high-level image processing functions such as high-speed parallel online training and feature recognition. The system overcomes the limitation of conventional programmable vision chips and parallel vision processors, whose pixel-level parallel arrays cannot perform high-level image processing, and facilitates the implementation of a fully functional, low-cost, low-power, intelligent and portable high-speed real-time visual image system-on-chip.
Owner:INST OF SEMICONDUCTORS - CHINESE ACAD OF SCI
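The SOM online training that the abstract says the PE array can be reconfigured to perform follows a standard two-step update: find the best-matching unit (BMU), then pull the BMU and its grid neighbors toward the input. The sketch below shows one such step; the grid size, learning rate and neighborhood radius are illustrative, not taken from the patent.

```python
import math

def som_train_step(weights, x, lr=0.5, radius=1.0):
    """One online SOM step. weights: dict mapping (row, col) -> weight vector."""
    # 1. Best-matching unit: the node whose weight vector is closest to x.
    bmu = min(weights,
              key=lambda p: sum((w - xi) ** 2 for w, xi in zip(weights[p], x)))
    # 2. Move every node toward x, scaled by a Gaussian neighborhood
    #    function of its grid distance from the BMU.
    for p, w in weights.items():
        d2 = (p[0] - bmu[0]) ** 2 + (p[1] - bmu[1]) ** 2
        h = math.exp(-d2 / (2 * radius ** 2))
        weights[p] = [wi + lr * h * (xi - wi) for wi, xi in zip(w, x)]
    return bmu

# A 2x2 map of 2-D weight vectors.
weights = {(0, 0): [0.0, 0.0], (0, 1): [1.0, 0.0],
           (1, 0): [0.0, 1.0], (1, 1): [1.0, 1.0]}
bmu = som_train_step(weights, [0.9, 0.1])
print(bmu)  # (0, 1) — the unit closest to the input wins
```

On a pixel-parallel PE array, the distance computations of step 1 run simultaneously on all units, which is what makes the reconfiguration described above attractive.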

Method and apparatus for a reconfigurable multi-media system

A reconfigurable multi-media system, method and device provides monitoring and reconfiguration of a plurality of communication layers of a communications stack to dynamically reconfigure the modulation and coding of software defined radio (SDR). The system includes a software object radio (SWR) library having reconfigurable object specification, design and performance parameters, the SWR is adapted for at least one of transmitting and receiving multi-media content via wireless communication; a controller in communication with the SWR library; a power management device module in communication with said controller; a reconfigurable encoder/decoder in communication with said controller to provide the SWR with dynamic coding information for modulation; a TCP/IP interface in communication with said reconfigurable encoder/decoder and said controller; and an application layer comprising a link layer and a reconfigurable physical layer in communication with each other and said controller, the physical layer adapted for communication with a channel, and the application layer including at least one driver for multi-media delivery. The controller monitors the physical layer and link layer information and the reconfigurable encoder/decoder dynamically reconfigures modulation and coding of multi-media content according to a cross-layer optimization approach.
Owner:UNILOC 2017 LLC
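The cross-layer idea above — a controller watching physical-layer and link-layer state and reconfiguring modulation and coding accordingly — can be sketched as a simple policy function. The SNR thresholds, loss-rate threshold and modulation/code-rate pairs below are illustrative assumptions, not values from the patent.

```python
def select_mod_coding(snr_db, loss_rate):
    """Pick a (modulation, code_rate) pair for the observed channel state."""
    if snr_db >= 20 and loss_rate < 0.01:
        return ("64-QAM", "5/6")   # clean channel: maximize throughput
    if snr_db >= 12:
        return ("16-QAM", "3/4")   # moderate channel
    return ("QPSK", "1/2")         # noisy channel: robust settings

print(select_mod_coding(25, 0.001))  # ('64-QAM', '5/6')
print(select_mod_coding(8, 0.05))    # ('QPSK', '1/2')
```

In the patented system this decision would be made by the controller and carried out by the reconfigurable encoder/decoder rather than by a lookup like this one.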

Separable array-based reconfigurable accelerator and realization method thereof

The invention provides a separable array-based reconfigurable accelerator and a realization method thereof. The reconfigurable accelerator comprises a scratchpad memory cache area, separable calculation arrays and a register cache area. The scratchpad memory cache area enables data reuse across convolution calculation and sparse fully connected calculation. The separable calculation arrays comprise multiple reconfigurable calculation units and are divided into a convolution calculation array and a sparse fully connected calculation array. The register cache area, a storage area formed by multiple registers, provides the input data, weight data and corresponding output results for both kinds of calculation: input and weight data for convolution are fed to the convolution calculation array, which outputs the convolution result, while input and weight data for the sparse fully connected calculation are fed to the sparse fully connected calculation array, which outputs that result. By fusing the characteristics of the two neural network layer types, the utilization of the chip's calculation resources and memory bandwidth is improved.
Owner:TSINGHUA UNIV
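The benefit of the sparse fully connected path above is that zero weights need never be fetched or multiplied. A minimal sketch of that computation, with the weight matrix stored as per-neuron nonzero (index, weight) pairs; the storage format shown is illustrative, not the accelerator's actual one.

```python
def sparse_fc(x, rows):
    """Sparse fully connected layer.
    rows: one list per output neuron of (input_index, weight) nonzero pairs."""
    return [sum(w * x[j] for j, w in row) for row in rows]

x = [1.0, 2.0, 3.0, 4.0]
# A 2x4 weight matrix with most entries zero:
#   [[0, 5, 0, 0],
#    [1, 0, 0, 2]]
# stored as nonzeros only — 3 multiplies instead of 8:
rows = [[(1, 5.0)], [(0, 1.0), (3, 2.0)]]
print(sparse_fc(x, rows))  # [10.0, 9.0]
```

The dense convolution array, by contrast, thrives on regular data reuse, which is why the patent separates the two arrays rather than forcing one structure to serve both.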

Reconfigurable network on a chip

An architecture for a reconfigurable network that can be implemented on a semiconductor chip is disclosed, which includes a hierarchical organization of network components and functions that are readily programmable and highly flexible. Essentially, a reconfigurable network on a chip is disclosed, which includes aspects of reconfigurable computing, system on a chip, and network on a chip designs. More precisely, a reconfigurable network on a chip includes a general purpose microprocessor for implementing software tasks, a plurality of on-chip memories for facilitating the processing of large data structures as well as processor collaboration, a plurality of reconfigurable execution units including self-contained, individually reconfigurable programmable logic arrays, a plurality of configurable system interface units that provide interconnections between on-chip memories, networks or buses, an on-chip network including a network interconnection interface that enables communication between all reconfigurable execution units, configurable system interface units and general purpose microprocessors, a fine grain interconnect unit that gathers associated input/output signals for a particular interface and attaches them to a designated system interface resource, and a plurality of input/output blocks that supply the link between an on-chip interface resource and a particular external network or device interface. Advantageously, the network minimizes the configuration latency of the reconfigurable execution units and also enables reconfiguration on-the-fly.
Owner:HONEYWELL INT INC


Computational fluid dynamics (CFD) coprocessor-enhanced system and method

The present invention provides a system, method and product for porting computationally complex CFD calculations to a coprocessor in order to decrease overall processing time. The system comprises a CPU in communication with a coprocessor over a high-speed interconnect. In addition, an optional display may be provided for displaying the calculated flow field. The system and method include porting variables of governing equations from a CPU to a coprocessor; receiving calculated source terms from the coprocessor; and solving the governing equations at the CPU using the calculated source terms. In a further aspect, the CPU compresses the governing equations into a combination of higher- and/or lower-order equations with fewer variables for porting to the coprocessor. The coprocessor receives the variables, iteratively solves for source terms of the equations using a plurality of parallel pipelines, and transfers the results to the CPU. In a further aspect, the coprocessor decompresses the received variables, solves for the source terms, and then compresses the results for transfer to the CPU. The CPU solves the governing equations using the calculated source terms. In a further aspect, the governing equations are compressed and solved using spectral methods. In another aspect, the coprocessor includes a reconfigurable computing device such as a Field Programmable Gate Array (FPGA). In yet another aspect, the coprocessor may be used for specific applications such as Navier-Stokes equations or Euler equations and may be configured to more quickly solve non-linear advection terms with efficient pipeline utilization.
Owner:VIRGINIA TECH INTPROP INC
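The CPU/coprocessor split described above can be sketched as two functions: the "coprocessor" computes the expensive nonlinear advection source term for every grid point (the work the FPGA pipelines would do in parallel), and the host integrates the field forward using those terms. The 1-D advection-like update, grid spacing and time step are illustrative assumptions.

```python
def coprocessor_source_terms(u, dx):
    """Stands in for the FPGA pipelines: the nonlinear advection term
    -u * du/dx at each interior point, via a central difference."""
    n = len(u)
    s = [0.0] * n
    for i in range(1, n - 1):
        s[i] = -u[i] * (u[i + 1] - u[i - 1]) / (2 * dx)
    return s

def cpu_step(u, dx, dt):
    """Host side: port the field out, receive source terms, integrate."""
    s = coprocessor_source_terms(u, dx)  # would cross the high-speed interconnect
    return [ui + dt * si for ui, si in zip(u, s)]

u = [0.0, 1.0, 2.0, 1.0, 0.0]
u_next = cpu_step(u, dx=1.0, dt=0.1)
print(u_next)
```

The point of the split is that `coprocessor_source_terms` is embarrassingly parallel over grid points, while the surrounding solve stays on the general-purpose CPU.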

Method for forecasting wind speed of high speed railway line

Inactive | CN102063641A | Solve problems in forecasting methods | Guaranteed accuracy | Neural learning methods | Decomposition | Reconfigurable computing
The invention discloses a method for forecasting the wind speed of a high-speed railway line by combining wavelet analysis with a BP (Back Propagation) neural network. Actually measured wind speed data is subjected to multi-layer decomposition and reconstruction computation using the decomposition and reconstruction algorithms of wavelet analysis, so as to decompose the original wind speed sequence into wind speed sequences at different scales. The method comprises the following steps: normalizing each layer's decomposed wind speed sequence; training the neural network with the error back-propagation learning algorithm until convergence; establishing a corresponding forecasting model for each layer's wind speed sequence; and finally reconstructing the forecast values of all layers to obtain a forecast of the original wind speed sequence. The invention overcomes defects of traditional methods such as low forecasting precision and overly large time intervals, realizes wind speed forecasting for a high-speed railway line under various climate types, offers short computing time and high forecasting precision, and provides a scientific reference for formulating operation control regulations for high-speed railways.
Owner:PEKING UNIV
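The decompose-then-reconstruct pipeline at the heart of the method above can be shown with a one-level Haar wavelet (the abstract does not name the wavelet; Haar is chosen here for brevity). Each per-layer forecasting step with the BP network is omitted; the sketch only demonstrates that the decomposition is exactly invertible.

```python
def haar_decompose(x):
    """Split an even-length series into approximation and detail sub-series."""
    approx = [(x[2 * i] + x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    detail = [(x[2 * i] - x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    return approx, detail

def haar_reconstruct(approx, detail):
    """Invert the decomposition: recover the original series exactly."""
    x = []
    for a, d in zip(approx, detail):
        x += [a + d, a - d]
    return x

wind = [5.0, 7.0, 6.0, 10.0]       # measured wind speeds
a, d = haar_decompose(wind)
print(a, d)                         # [6.0, 8.0] [-1.0, -2.0]
print(haar_reconstruct(a, d))       # [5.0, 7.0, 6.0, 10.0]
```

In the patented method, each sub-series would first be forecast by its own trained network, and the reconstruction would then combine the forecasts instead of the originals.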

Reconfigurable CNN (Convolutional Neural Network) high concurrency convolution accelerator

The invention provides a reconfigurable CNN (Convolutional Neural Network) high-concurrency convolution accelerator, which comprises a weight address generation unit, a result address generation unit, a reconfigurable calculation unit, a feature map address generation unit, a master controller and a memory exchange unit. The weight address generation unit generates the address of convolution kernel data in a cache; the result address generation unit generates the address of result data in the cache; the reconfigurable calculation unit can reconfigure a calculation array into two multiply-accumulate tree circuits of different granularities; the feature map address generation unit generates the address of feature map data in the cache; the master controller generates an accumulator reset signal synchronized with the address, gates the corresponding circuit in the reconfigurable calculation unit, and generates an interrupt signal at the end of the whole operation; and the memory exchange unit converts an effective feature map read address and a weight read address into read operations on a memory unit, and converts an effective result write address and data into a write operation on the memory unit. The accelerator simplifies the control logic, greatly improves the parallelism of multi-channel convolution operations and memory access efficiency, and reduces occupied resources.
Owner:NANJING UNIV
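The address-generation units above all do variants of the same job: turn a logical (row, column) position into a flat cache address. A hedged sketch for the feature map side, assuming row-major storage; the map width, base address and window position are hypothetical examples.

```python
def conv_window_addresses(base, width, out_row, out_col, k):
    """Flat addresses of the k x k input window feeding one output position,
    for a row-major feature map starting at `base` with `width` pixels per row."""
    addrs = []
    for i in range(k):
        for j in range(k):
            addrs.append(base + (out_row + i) * width + (out_col + j))
    return addrs

# 3x3 kernel over an 8-pixel-wide feature map, output position (1, 2):
print(conv_window_addresses(base=0, width=8, out_row=1, out_col=2, k=3))
# [10, 11, 12, 18, 19, 20, 26, 27, 28]
```

In hardware this loop nest becomes counters and adders, which is why generating addresses in a dedicated unit is cheap while keeping the datapath fully busy.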

Multi-kernel DSP reconfigurable special integrated circuit system

Inactive | CN102073481A | Excellent scheduling management ability | Powerful digital signal processing capability | Concurrent instruction execution | Architecture with single central processing unit | Data interface | Integrated circuit
The invention discloses a multi-kernel DSP (digital signal processing) reconfigurable special integrated circuit system, belonging to the technical field of digital signal processing. It comprises an internal bus, plus a control processor kernel, an enhanced direct memory access, an input and output cache, a DSP multi-kernel array, a configuration information cache, a reconfigurable logic unit and an internal cache, all connected with the internal bus, wherein the DSP multi-kernel array is connected with the configuration information cache and the reconfigurable logic unit through a reconfigurable on-chip interconnection mode and transmits the configuration information and reconfiguration information. The system combines well with the IP-multiplexing technology of SoC (System on a Chip): the multi-kernel DSP reconfigurable ASIC (Application Specific Integrated Circuit) takes the DSP multi-kernel array as its core while integrating IP modules such as logic control, an embedded memory and a data interface, and can thereby implement large-scale computing flexibly and efficiently.
Owner:SHANGHAI JIAO TONG UNIV +1

VLSI layouts of fully connected generalized and pyramid networks with locality exploitation

VLSI layouts of generalized multi-stage and pyramid networks for broadcast, unicast and multicast connections are presented using only horizontal and vertical links with spatial locality exploitation. The VLSI layouts employ shuffle exchange links where outlet links of cross links from switches in a stage in one sub-integrated circuit block are connected to inlet links of switches in the succeeding stage in another sub-integrated circuit block so that said cross links are either vertical or horizontal, and vice versa. Furthermore, the shuffle exchange links are employed between different sub-integrated circuit blocks so that spatially nearer sub-integrated circuit blocks are connected with shorter links than the shuffle exchange links between spatially farther sub-integrated circuit blocks. In one embodiment the sub-integrated circuit blocks are arranged in a hypercube arrangement in a two-dimensional plane. The VLSI layouts exploit the benefits of significantly fewer cross points, lower signal latency, lower power and full connectivity with significantly fast compilation. The VLSI layouts with spatial locality exploitation presented are applicable to generalized multi-stage and pyramid networks, generalized folded multi-stage and pyramid networks, generalized butterfly fat tree and pyramid networks, generalized multi-link multi-stage and pyramid networks, generalized folded multi-link multi-stage and pyramid networks, generalized multi-link butterfly fat tree and pyramid networks, generalized hypercube networks, and generalized cube connected cycles networks for speedup of s ≥ 1. The embodiments of VLSI layouts are useful in wide target applications such as FPGAs, CPLDs, pSoCs, ASIC placement and route tools, networking applications, parallel & distributed computing, and reconfigurable computing.
Owner:KONDA TECH
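The shuffle exchange wiring pattern named above follows a classic rule: in a perfect shuffle over 2**n endpoints, index i is wired to the index obtained by rotating i's n-bit pattern left by one. A small sketch of just that link pattern, illustrative of the interconnect only, not of the patented layout itself:

```python
def perfect_shuffle(i, n):
    """Rotate the n-bit index i left by one bit: the perfect-shuffle wiring rule."""
    msb = (i >> (n - 1)) & 1
    return ((i << 1) & ((1 << n) - 1)) | msb

# For 8 endpoints (n = 3 bits):
print([perfect_shuffle(i, 3) for i in range(8)])
# [0, 2, 4, 6, 1, 3, 5, 7]
```

Applying the rotation n times returns every index to itself, which is why log-depth shuffle exchange stages suffice to realize arbitrary permutations in such networks.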