222 results about How to "Reduce read" patented technology

A convolutional neural network accelerator based on computation optimization on an FPGA

The invention discloses a convolutional neural network accelerator based on computation optimization on an FPGA. The accelerator comprises an AXI4 bus interface, a data cache region, a pre-fetched data region, a result cache region, a state controller and a PE array. The data cache region caches the feature-map data, convolution-kernel data and index values read from the external DDR memory through the AXI4 bus interface; the pre-fetched data region pre-fetches, in parallel from the feature-map sub-cache region, the feature-map data to be fed into the PE array; the result cache region caches the calculation result of each row of PEs; the state controller controls the working state of the accelerator and the transitions between working states; and the PE array reads the data in the pre-fetched data region and the convolution-kernel sub-cache region to carry out the convolution operation. The accelerator exploits parameter sparsity, repeated weight data and the properties of the ReLU activation function to terminate redundant calculations early, which reduces the amount of computation and lowers energy consumption by reducing the memory-access frequency.
Owner: SOUTHEAST UNIV +2
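
The early-termination ideas translate naturally to software. Below is a minimal Python sketch, not the patented hardware, of two of the tricks the abstract names: skipping zero weights (parameter sparsity) and sharing a single multiplication across repeated weight values; the function name and data are illustrative. The abstract's third trick, cutting accumulation short once ReLU is known to output zero, is omitted for brevity; here ReLU is simply applied to the final sum.

```python
# Hypothetical software analogue of the accelerator's computation-skipping
# tricks: skip zero weights, and multiply once per distinct weight value.
from collections import defaultdict

def sparse_shared_dot(weights, activations):
    """Dot product that skips zero weights and multiplies once per
    distinct weight value (summing the matching activations first)."""
    groups = defaultdict(float)          # weight value -> sum of activations
    for w, a in zip(weights, activations):
        if w == 0:                       # sparsity: skip redundant work
            continue
        groups[w] += a                   # repeated weights share one multiply
    acc = sum(w * s for w, s in groups.items())
    return max(acc, 0.0)                 # ReLU applied to the result

# Example: six multiply-accumulates collapse to two multiplies.
print(sparse_shared_dot([0, 3, 3, 0, 5, 3], [1, 2, 4, 8, 1, 1]))  # 26.0
```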

Rapid data recovery method and system based on cross erasure coding

Active CN106844098A | Concepts: Automatic Priority Adjustment; Automatically Adjust the Number of Threads; Redundant Data Error Correction; Redundant Operation Error Correction; Exclusive Or; Data Recovery
The invention provides a rapid data recovery method and system based on cross erasure coding. When data is written, encoding is computed in the LRC manner and the global coding blocks are grouped in pairs, each global coding block in a pair being divided into two halves; the second half of one global coding block and the first half of the other are XORed, and the result is written into the second half of the current global coding block. When the first global coding block is lost, the second halves of the data blocks are read to obtain the pre-XOR second halves of the lost block; XORing away the post-XOR second half of the other global coding block yields the first half of the lost block, and XORing the lost block's pre-XOR data with the first half of the other global coding block yields its second half. The data is flushed back to the corresponding disks in stripes for storage, and after the data has been written to the storage server, an asynchronous longitudinal encoding calculation is performed.
Owner: INST OF COMPUTING TECH CHINESE ACAD OF SCI +1
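
Under one plausible reading of the abstract, the write-side transform pairs two global coding blocks and replaces each block's stored second half with the XOR of that half and the partner's first half. The Python sketch below shows this transform and its inversion when both stored blocks are readable; all names are hypothetical, and the actual recovery path through the data blocks of the LRC stripe is omitted.

```python
# Toy sketch of the cross-XOR write transform, under one plausible reading
# of the abstract. Not the patent's actual layout or recovery procedure.

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def cross_encode(p: bytes, q: bytes):
    """Return the stored forms of a pair of global coding blocks."""
    h = len(p) // 2
    p1, p2 = p[:h], p[h:]
    q1, q2 = q[:h], q[h:]
    stored_p = p1 + xor_bytes(p2, q1)   # second half piggybacks Q's first half
    stored_q = q1 + xor_bytes(q2, p1)   # second half piggybacks P's first half
    return stored_p, stored_q

def cross_decode(stored_p: bytes, stored_q: bytes):
    """Invert the transform when both stored blocks are readable."""
    h = len(stored_p) // 2
    p1, q1 = stored_p[:h], stored_q[:h]
    p2 = xor_bytes(stored_p[h:], q1)
    q2 = xor_bytes(stored_q[h:], p1)
    return p1 + p2, q1 + q2

p, q = b"PPPPpppp", b"QQQQqqqq"
sp, sq = cross_encode(p, q)
assert cross_decode(sp, sq) == (p, q)
```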

Implementation method for operator reuse in a parallel database

The invention discloses an implementation method for operator reuse in a parallel database, comprising the following steps:
1. Generate a serial query plan for the query through the normal query-planning method; the query plan is a binary-tree structure.
2. Scan the query plan from top to bottom, search for materialized reusable operators, change the query plan structure, and turn thread-level materialized operators into globally reusable materialized operators.
3. Parallelize the query plan changed in step 2, generating a plan forest for parallel execution by multiple threads.
4. Combine the globally reusable operators in the plan forest generated in step 3, producing a directed-graph plan in which the materialized reusable operators can be executed by multiple threads in parallel.
5. Each thread executes its own part of the plan in the directed graph in parallel; the thread that reaches a globally reusable operator first is called the main thread, which locks the operator and actually executes it and its sub-plan while the other threads wait.
6. After execution, the main thread unlocks the globally reusable operator, and the other threads begin to read data from it and continue executing their own plan trees.
7. After all plans have read the data of the globally reusable operator, the main thread releases the operator's materialized data.
Owner: 天津神舟通用数据技术有限公司 (Tianjin Shenzhou General Data Technology Co., Ltd.)
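
Steps 5 through 7 describe a materialize-once discipline: the first thread to reach a globally reusable operator executes it under a lock while the others wait, then all threads read the result. A minimal sketch of that discipline with Python threads follows; the operator and its "plan" are stand-ins, not the patent's actual structures, and step 7's release of the materialized data (which would need to count readers) is omitted.

```python
# Minimal sketch of the lock-then-read discipline in steps 5-6.
import threading

class ReusableOperator:
    def __init__(self, produce):
        self._produce = produce          # the operator's own sub-plan (stand-in)
        self._lock = threading.Lock()
        self._done = threading.Event()
        self.result = None

    def get(self):
        if not self._done.is_set():
            if self._lock.acquire(blocking=False):
                try:                     # this thread becomes the "main thread"
                    if not self._done.is_set():   # double-check under the lock
                        self.result = self._produce()
                        self._done.set()          # unlock: waiters may read
                finally:
                    self._lock.release()
            else:
                self._done.wait()        # other threads wait for materialization
        return self.result

op = ReusableOperator(lambda: sum(range(1_000_000)))
threads = [threading.Thread(target=lambda: print(op.get())) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
```

Using an Event for the "unlocked" state lets threads that arrive after materialization skip the lock entirely and read the result directly.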