
171 results for "data reuse" patented technology

Convolution neural network (CNN) hardware accelerator and acceleration method

The invention discloses a convolutional neural network (CNN) hardware accelerator and an acceleration method. The accelerator comprises an input buffer and a plurality of operation units. The input buffer caches input feature-map data, which the operation units share to perform CNN convolution operations. Each operation unit comprises a convolution kernel buffer, an output buffer, and a multiplier-adder unit formed by a plurality of MAC components. The convolution kernel buffer receives kernel data returned from an external storage component and supplies it to each MAC component of the multiplier-adder unit; each MAC component combines the shared input feature-map data with the kernel data to perform a multiply-accumulate operation, and the intermediate result is written into the output buffer. The acceleration method applies this accelerator. The CNN hardware accelerator and acceleration method improve CNN hardware acceleration performance and offer a high data-reuse rate and efficiency, little data migration, good scalability, low system bandwidth requirements, and small hardware overhead.
Owner:NAT UNIV OF DEFENSE TECH
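
The dataflow described above lends itself to a short illustration. The following is a minimal Python sketch, not the patented RTL: the array shapes, the function name conv2d_shared_input, and the use of NumPy are all assumptions made for clarity. It shows the core reuse idea, namely that one input feature-map window is fetched once and consumed by every operation unit's MAC group, each applying its own kernel.

    # Minimal sketch, assuming NumPy and the illustrative name
    # conv2d_shared_input (not from the patent): each operation unit
    # keeps its own kernel and output buffer, while one shared input
    # buffer supplies the same feature-map window to every unit.
    import numpy as np

    def conv2d_shared_input(feature, kernels):
        """feature: (H, W) input tile; kernels: (N, K, K), one per unit."""
        n, k, _ = kernels.shape
        h, w = feature.shape
        out = np.zeros((n, h - k + 1, w - k + 1))  # per-unit output buffers
        for i in range(h - k + 1):
            for j in range(w - k + 1):
                window = feature[i:i + k, j:j + k]  # fetched once...
                for u in range(n):                  # ...reused by every unit
                    # one MAC group per unit: multiply-accumulate into the
                    # unit's private output buffer (the intermediate result)
                    out[u, i, j] = np.sum(window * kernels[u])
        return out

    feat = np.random.rand(8, 8)       # input feature-map tile
    kers = np.random.rand(4, 3, 3)    # four operation units, 3x3 kernels
    print(conv2d_shared_input(feat, kers).shape)  # (4, 6, 6)

Because each window is read once per output position rather than once per kernel, feature-map traffic from external memory shrinks by roughly the number of operation units, which is where the low bandwidth requirement would come from.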

Cache replacement method under heterogeneous memory environment

Patent CN104834608A (Active). Tags: memory addressing/allocation/relocation; hardware structure; phase-change memory.
The invention discloses a cache replacement method for a heterogeneous memory environment. The method adds a source flag bit to the cache-line hardware structure to mark whether a cache line's data comes from DRAM (dynamic random-access memory) or PCM (phase-change memory), and adds a sample storage unit to the CPU to record the program's cache-access behavior and data-reuse range information. The method comprises three sub-methods: a sampling sub-method that gathers statistics on cache-access behavior, an equivalent-position calculation sub-method that computes equivalent positions, and a replacement sub-method that determines which cache line to replace. By optimizing the traditional cache replacement policy for the program's cache-access characteristics in a heterogeneous memory environment, the method reduces the high latency cost of accessing PCM on a cache miss and thereby improves the cache-access performance of the whole system.
Owner:HUAZHONG UNIV OF SCI & TECH
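
A rough sketch can make the replacement logic concrete. The code below is an illustrative Python model, not the patented mechanism: the class name HeteroCache, the helper equivalent_position, and the PCM_PENALTY ratio of 4 are assumptions. It shows the central idea, that the source flag bit lets the policy scale a PCM-sourced line's LRU-stack position, so PCM lines look more recently used and a DRAM line is evicted first when its refetch cost is lower.

    # Illustrative model only; HeteroCache, equivalent_position, and the
    # PCM_PENALTY ratio are assumptions, not the patent's definitions.
    from collections import OrderedDict

    PCM_PENALTY = 4  # assumed PCM-to-DRAM miss-latency ratio

    class HeteroCache:
        def __init__(self, capacity):
            self.capacity = capacity
            self.lines = OrderedDict()  # address -> source flag (True = PCM)

        def equivalent_position(self, pos, from_pcm):
            # pos counts from the MRU end; scaling a PCM line's position
            # down makes it look more recently used, so it survives longer
            return pos / PCM_PENALTY if from_pcm else pos

        def access(self, addr, from_pcm):
            if addr in self.lines:
                self.lines.move_to_end(addr)  # hit: promote to MRU
                return
            if len(self.lines) >= self.capacity:
                ranked = enumerate(reversed(list(self.lines.items())))
                victim = max(ranked, key=lambda e:
                             self.equivalent_position(e[0], e[1][1]))[1][0]
                del self.lines[victim]  # evict the highest equivalent position
            self.lines[addr] = from_pcm

    cache = HeteroCache(capacity=3)
    cache.access(0xA, from_pcm=True)    # oldest line, but sourced from PCM
    cache.access(0xB, from_pcm=False)
    cache.access(0xC, from_pcm=False)
    cache.access(0xD, from_pcm=False)   # evicts 0xB, sparing the PCM line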

Hardware interconnection architecture of reconfigurable convolutional neural network

The invention belongs to the technical field of hardware design for image-processing algorithms and specifically discloses a hardware interconnection architecture for a reconfigurable convolutional neural network. The architecture comprises a data-and-parameter off-chip caching module, a basic calculation unit array module, and an arithmetic logic unit calculation module. The off-chip caching module caches the pixel data of the input pictures to be processed and the parameters used during convolutional neural network calculation; the basic calculation unit array module realizes the core calculation of the network; and the arithmetic logic unit calculation module processes the calculation results of the array, accumulating partial sums and applying down-sampling layers and activation functions. The basic calculation units are interconnected as a two-dimensional array: in the row direction, input data is shared and different parameter data enables parallel calculation; in the column direction, each row's calculation result is transferred row by row to serve as input to the next row. Through this structural interconnection, the architecture reduces bandwidth demand while enhancing data reuse.
Owner:FUDAN UNIV
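
The row/column dataflow can be summarized in a few lines of code. The sketch below is an assumed simplification in Python (the function pe_array and its shapes are illustrative, and the per-cycle pipelining of a real array is collapsed into a loop): every processing element in a row multiplies the same shared input by its own weight, and each row adds its products to the partial sums passed down from the row above.

    # Assumed simplification (pe_array and its shapes are illustrative;
    # a real array pipelines this cycle by cycle rather than in a loop).
    import numpy as np

    def pe_array(inputs, weights):
        """inputs: one value broadcast per row; weights: (rows, cols)."""
        rows, cols = weights.shape
        psum = np.zeros(cols)        # partial sums entering the top row
        for r in range(rows):        # column direction: results flow down
            # row direction: all PEs in row r share inputs[r] and multiply
            # it by their own weights in parallel
            psum += inputs[r] * weights[r]
        return psum                  # sums leaving the bottom row

    x = np.array([1.0, 2.0, 3.0])    # one shared input per row
    w = np.arange(6.0).reshape(3, 2) # a 3x2 array of PEs
    print(pe_array(x, w))            # [16. 22.]

Sharing inputs along rows and passing partial sums along columns means each datum crosses the chip boundary once, which is the mechanism behind the claimed bandwidth reduction.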

An implementation method for uplink pilot insertion and data multiplexing

The invention discloses an implementation method for uplink pilot insertion and data multiplexing, suitable for the uplink pilot structure of single-carrier frequency-division multiplexing, in which a subframe contains a first uplink pilot and a second uplink pilot. The method covers four configurations: (1) the data is mapped in localized (contiguous) mode, the first uplink pilot uses time-domain or frequency-domain code division, and the second uplink pilot uses frequency division; (2) the data is mapped in localized mode, the first uplink pilot uses frequency division, and the second uplink pilot uses time-domain or frequency-domain code division; (3) the data is mapped in distributed subcarrier mode, the first uplink pilot uses time-domain or frequency-domain code division, and the second uplink pilot uses frequency division; or (4) the data is mapped in distributed subcarrier mode, the first uplink pilot uses frequency division, and the second uplink pilot uses time-domain or frequency-domain code division. By selecting the appropriate pilot insertion and data multiplexing scheme, the system achieves channel compensation and flexible scheduling.
Owner:ZTE CORP
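
For clarity, the four claimed combinations can be written out as data. The sketch below is just an enumeration in Python; the labels localized, distributed, FDM, and CDM are paraphrases of the claim language above, not constants from any standard or from the patent.

    # Plain enumeration of the four claimed configurations; labels are
    # paraphrases of the claim language, not standardized identifiers.
    CONFIGURATIONS = [
        # (data mapping, first uplink pilot,          second uplink pilot)
        ("localized",   "time/frequency-domain CDM", "FDM"),
        ("localized",   "FDM",                       "time/frequency-domain CDM"),
        ("distributed", "time/frequency-domain CDM", "FDM"),
        ("distributed", "FDM",                       "time/frequency-domain CDM"),
    ]

    for data_mode, pilot1, pilot2 in CONFIGURATIONS:
        print(f"data={data_mode:11} pilot1={pilot1:26} pilot2={pilot2}")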