31 results for patented technologies on how to "reduce access overhead"

Unified bit-width conversion structure and method for the cache and bus interface of a system chip

The invention discloses a unified bit-width conversion structure and method for the cache and bus interface of a system chip. The conversion structure comprises a processor core and a plurality of IP cores that exchange data with the processor core over an on-chip bus; a memory controller IP communicates with an off-chip main memory. The processor core comprises an instruction pipeline and a hit-judgment logic unit that receives operation instructions from the pipeline. An access bit-width judgment unit and a bit-width/address conversion unit are arranged between the hit-judgment logic unit and the cache bus interface; the hit-judgment logic unit returns its judgment result to the instruction pipeline, and the processor core is connected to the on-chip bus through the cache bus interface. In the conversion method, when a byte or half-word read access misses the cache and its address falls within a cacheable region, the bit-width/address conversion unit widens the byte or half-word read into a single full-width access and completes it over the bus, leaving the original cache-update policy unaffected while adding flexibility.
Owner:NO 771 INST OF NO 9 RES INST CHINA AEROSPACE SCI & TECH
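The widening idea in the abstract can be sketched in software. The following is a minimal illustration of my own (not the patented hardware): a byte or half-word read that misses the cache is turned into one aligned full-word bus access, and the requested narrow value is extracted from the returned word. The function names, the 32-bit bus width, and the little-endian layout are all assumptions.

```python
WORD_BYTES = 4  # assumed 32-bit bus width

def narrow_read(bus_read_word, addr, size):
    """Widen a narrow read into one full-word bus access.

    bus_read_word(aligned_addr) -> 32-bit word as an int (little-endian);
    size is 1 (byte) or 2 (half-word).
    """
    aligned = addr & ~(WORD_BYTES - 1)   # word-align the address
    word = bus_read_word(aligned)        # single full-width bus access
    offset = addr - aligned              # byte offset inside the word
    shift = 8 * offset
    mask = (1 << (8 * size)) - 1
    return (word >> shift) & mask        # extract the narrow value
```

For example, reading the byte at offset 1 of the little-endian word `0x11223344` yields `0x33`, while the hardware performed only one bus-width transaction.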

Method for storing the diagonal data of a sparse matrix, and SpMV (sparse matrix-vector multiplication) implementation based on it

Inactive · CN102141976B · Effects: Reduce demand; Reduce memory access overhead · Topics: Complex mathematical operations; Sparse matrix vector; Array data structure
The invention discloses a method for storing the diagonal data of a sparse matrix, and an SpMV implementation based on it. The storage method comprises the following steps: (1) scanning the sparse matrix A row by row and representing the position of each non-zero diagonal by its diagonal number; (2) splitting A into several sparse sub-matrices, using the intersections of the non-zero diagonals with the sides of A as horizontal cut lines; and (3) storing the elements on the non-zero diagonals of each sub-matrix into a val array in row order. The SpMV implementation comprises: (1) traversing the sub-matrices and computing the matrix-vector product y_i = A_i * x for each sparse sub-matrix; and (2) merging the partial products of all sub-matrices. This storage method needs no per-element index array for the non-zero elements, which reduces both memory-access overhead and storage-space requirements; the diagonal numbers and the index data for the x array occupy little storage, lowering access complexity; and all data needed for the computation are accessed contiguously, so the compiler and hardware can optimize thoroughly.
Owner:INST OF SOFTWARE - CHINESE ACAD OF SCI
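The core idea, storing only the non-zero diagonals so that no per-element index array is needed, can be sketched in plain Python. This is my own minimal construction of a diagonal (DIA-style) layout, not the patent's exact segmented format; the names `offsets` and `diags` are assumptions.

```python
def dia_from_dense(A):
    """Return (offsets, diags): the numbers of the non-zero diagonals
    of square matrix A and the elements stored along each of them."""
    n = len(A)
    offsets, diags = [], []
    for k in range(-(n - 1), n):
        # Diagonal k holds A[i][i+k] for the rows where i+k is in range.
        d = [A[i][i + k] for i in range(max(0, -k), min(n, n - k))]
        if any(d):
            offsets.append(k)
            diags.append(d)
    return offsets, diags

def dia_spmv(offsets, diags, x):
    """Compute y = A @ x from the stored diagonals alone; no
    per-element row/column index array is consulted."""
    n = len(x)
    y = [0.0] * n
    for k, d in zip(offsets, diags):
        r0 = max(0, -k)              # first row touched by diagonal k
        for t, v in enumerate(d):
            y[r0 + t] += v * x[r0 + t + k]
    return y
```

Note that both `d` and `x` are walked with unit stride inside the inner loop, which is the contiguous-access property the abstract highlights.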

Lossless-recovery distributed multilingual retrieval platform and method

The invention provides a lossless-recovery distributed multilingual retrieval platform and method. The platform comprises a main node and distributed nodes that communicate with it. The main node and each distributed node are connected to an external storage device, which is configured to store, at preset time intervals, the data and memory state received by the node it serves. During fault recovery, the data in the external storage device are restored directly into local memory, the data are adjusted, and the routing algorithm is re-run so that it points to the new node. The main node is configured to dispatch multilingual data meeting the retrieval conditions to the distributed nodes; the distributed nodes are configured to query multilingual data meeting the retrieval conditions in the hotspot index table of the index-memory cache layer; and the hotspot index table holds multilingual data whose access frequency is not less than a preset access-frequency threshold.
Owner:安徽芃睿科技有限公司
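The snapshot-and-restore mechanism described above can be illustrated with a short sketch of my own (the `Node` class, file-based store, and `pickle` serialization are assumptions, not the platform's actual implementation): each node periodically persists its in-memory state to external storage, and on recovery reloads that state directly instead of rebuilding it.

```python
import pickle

class Node:
    """Toy retrieval node that snapshots its memory state externally."""

    def __init__(self, store_path):
        self.store_path = store_path
        self.hot_index = {}            # hotspot index table: term -> doc ids

    def snapshot(self):
        # Persist the memory state; the caller schedules this at a
        # preset interval (e.g. with a timer thread).
        with open(self.store_path, "wb") as f:
            pickle.dump(self.hot_index, f)

    def recover(self):
        # Lossless recovery: restore state straight into local memory.
        with open(self.store_path, "rb") as f:
            self.hot_index = pickle.load(f)
```

A replacement node pointed at the same store can thus resume serving the hotspot index without re-indexing, which is the access-overhead saving the abstract claims.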

A centralized interface communication concurrency control system and control method thereof

The invention belongs to the technical field of Internet-of-Things data services and provides a centralized interface-communication concurrency-control system and its control method. The system includes a user front-end module, a central processing platform, a central front-end module, and a message queue. Interface transactions between the central processing platform and each single point use a multi-threaded synchronous transaction mode, so that a fault or performance bottleneck at one single point does not affect access from the other points. The user front-end module contains a memory for storing user verification information; by comparison it judges whether the user initiating a transaction has transaction authority, and it performs an integrity check on the user's transaction data. The invention optimizes the existing interface-communication mode and improves interface transaction performance: when the central processing platform transacts with multiple single points, multi-threaded control of the interface transactions prevents single-point performance problems from spreading to the central processing platform, and dynamic allocation of single-point concurrent requests improves system performance and reduces wasted request overhead.
Owner:咸亨国际电子商务有限公司
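The fault-isolation property claimed above, that one slow or failed single point cannot block the others, can be sketched with one worker thread per endpoint. This is my own minimal construction; the function `run_transactions` and its signature are assumptions, not the system's API.

```python
import threading

def run_transactions(endpoints, handler):
    """endpoints: dict name -> list of requests;
    handler(name, request) -> result.
    Runs one worker thread per endpoint and returns {name: [results]}."""
    results = {name: [] for name in endpoints}

    def worker(name, requests):
        for req in requests:
            try:
                results[name].append(handler(name, req))
            except Exception as exc:
                # A fault stays local to this single point; the other
                # endpoints' threads keep processing their own queues.
                results[name].append(exc)

    threads = [threading.Thread(target=worker, args=(n, reqs))
               for n, reqs in endpoints.items()]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

With this structure an exception raised while transacting with one endpoint is recorded in that endpoint's result list only; the remaining endpoints complete normally.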

A sparse-matrix storage method using compressed sparse rows with local information (CSRL), and an SpMV implementation based on it

The invention discloses a sparse-matrix storage method, CSRL (Compressed Sparse Row with Local information), and an SpMV (sparse matrix-vector multiplication) implementation based on it. The storage method comprises the following steps: scanning the sparse matrix A row by row and storing the value of each non-zero element sequentially in an array val; treating each run of non-zero elements with consecutive column subscripts as a contiguous non-zero segment, recording the column subscript of the first element of each segment in an array jas and the number of non-zero elements in each segment in an array jan; and recording in an array ptr the index of the first contiguous segment of each row of A. Because the column indexes of the non-zero elements are stored in this merged form, the storage-space requirement is reduced and the data locality of the sparse matrix is fully exploited; accesses and computations can use SIMD (single instruction, multiple data) instructions, the number of memory accesses is reduced, and SpMV performance is improved.
Owner:INST OF SOFTWARE - CHINESE ACAD OF SCI
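The CSRL layout described above can be sketched directly from the abstract. The following is my reading of it in plain Python (the `ptr` array is assumed to have n+1 entries, CSR-style, so that `ptr[i]:ptr[i+1]` spans row i's segments); it is an illustration, not the patented implementation.

```python
def csrl_from_dense(A):
    """Build the CSRL arrays: val (values row by row), jas (starting
    column of each contiguous run), jan (run lengths), ptr (first run
    of each row; n+1 entries)."""
    val, jas, jan, ptr = [], [], [], [0]
    for row in A:
        j = 0
        while j < len(row):
            if row[j] != 0:
                start = j
                while j < len(row) and row[j] != 0:
                    val.append(row[j])
                    j += 1
                jas.append(start)      # first column of this run
                jan.append(j - start)  # run length
            else:
                j += 1
        ptr.append(len(jas))           # runs seen so far = end of row i
    return val, jas, jan, ptr

def csrl_spmv(val, jas, jan, ptr, x):
    """y = A @ x over the CSRL arrays; each run touches val and x with
    unit stride, which is what makes SIMD loads applicable."""
    n = len(ptr) - 1
    y = [0.0] * n
    v = 0                              # cursor into val
    for i in range(n):
        for s in range(ptr[i], ptr[i + 1]):
            col = jas[s]
            for t in range(jan[s]):
                y[i] += val[v] * x[col + t]
                v += 1
    return y
```

Storing one (jas, jan) pair per run instead of one column index per non-zero is what reduces the index-storage and memory-access overhead relative to plain CSR.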
