
111 results for patented technology on how to "improve memory access efficiency"

Consistency maintenance device for a multi-core processor and consistency interaction method

The invention discloses a consistency maintenance device and a consistency interaction method for a multi-core processor, mainly solving the technical problem of long directory access latency during the coherence interactions that handle read misses and write misses in the cache coherence protocol of a traditional multi-core processor. According to the invention, all cores of the multi-core processor are divided into a plurality of parallel nodes, each node comprising several cores. When a read miss or write miss occurs, the node holding a valid copy of the data that is closest to the missing core is predicted and accessed directly according to a node-prediction cache, and the directory update step is deferred until the data access finishes, so that the directory access latency is completely hidden and access efficiency is improved. A two-level directory structure turns the growth of directory storage overhead from exponential to linear, achieving better scalability; and because prediction is performed at the coarse granularity of nodes rather than the fine granularity of individual cores, the storage overhead of the prediction information is reduced.
Owner:XI AN JIAOTONG UNIV
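The prediction-and-deferral idea in the abstract can be illustrated with a minimal sketch. This is an assumption-laden toy model, not the patented circuit: the prediction cache, node dictionaries, and directory mapping below are all illustrative names.

```python
# Hypothetical sketch of the node-prediction idea: on a read miss, a small
# prediction cache guesses which node holds a valid copy, the data is
# fetched from that node first, and the directory is consulted/updated
# only on the slow path or after the data access completes.

class NodePredictionCache:
    """Maps a block address to the node last known to hold a valid copy."""
    def __init__(self):
        self.table = {}

    def predict(self, block_addr):
        return self.table.get(block_addr)   # None -> fall back to directory

    def learn(self, block_addr, node_id):
        self.table[block_addr] = node_id

def handle_read_miss(block_addr, pcache, nodes, directory):
    """Fetch data from the predicted node; defer the directory update."""
    node = pcache.predict(block_addr)
    if node is None or block_addr not in nodes[node]:
        # Misprediction: consult the directory (slow path).
        node = directory[block_addr]
    data = nodes[node][block_addr]          # direct node-to-node fetch
    # Deferred step: the prediction/directory state is updated after the
    # data access, so its latency is hidden behind the data transfer.
    pcache.learn(block_addr, node)
    return data
```

Predicting at node granularity keeps the table small: it needs one entry per block per node group, not per core.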

Vector data access control method and vector memory that support limited sharing

The invention discloses a vector data access control method supporting limited sharing, and a vector memory. The method comprises the following steps: 1) uniformly addressing the vector memory; 2) acquiring the access information, and decomposing, expanding, and cyclically shift-arranging the vector addresses in the access information to generate N sets of access information; and 3) sending the N sets of access information to the respective access pipelines of the vector memory; and, if the current vector access command is a read command, applying the inverse cyclic shift arrangement to the N lanes of write-back data according to the shared shift address, obtaining N sets of write-back data that are sent to the corresponding vector processing units in the vector processor. The vector memory comprises a vector address generator, a vector memory unit, and an access management control unit; the access management control unit comprises a vector address arrangement unit and a vector data arrangement unit. The method has the advantages of low hardware cost and support for both limited sharing of vector data and non-aligned access.
Owner:NAT UNIV OF DEFENSE TECH
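The decompose-rotate-then-inverse-rotate flow can be sketched in a few lines. The bank count, lane layout, and function names below are assumptions for illustration; the key point is that the data-side rotation is the exact inverse of the address-side rotation.

```python
# Hypothetical sketch of the address-arrangement step: a vector access is
# decomposed into N per-lane requests, cyclically rotated by a shared
# shift amount, and read data is rotated back before write-back.

N = 4  # assumed number of memory banks / processing lanes

def rotate(seq, k):
    """Cyclic left rotation by k positions."""
    k %= len(seq)
    return seq[k:] + seq[:k]

def issue_vector_access(base_addr, shift):
    """Decompose and arrange: lane i's request goes to bank (i + shift) % N."""
    addrs = [base_addr + i for i in range(N)]   # decomposition / expansion
    return rotate(addrs, shift)                 # circular shift arrangement

def collect_read_data(bank_data, shift):
    """Inverse rotation restores lane order for the vector register."""
    return rotate(bank_data, -shift)
```

Because `collect_read_data` undoes `issue_vector_access`'s rotation, each vector processing unit always receives the element that belongs to its lane, regardless of the shared shift address.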

Unified bit-width conversion structure and method for the cache and bus interface of a system chip

The invention discloses a unified bit-width conversion structure and method for the cache and bus interface of a system chip. The conversion structure comprises a processor core and a plurality of IP cores that exchange data with the processor core through an on-chip bus; a memory-controller IP communicates with the off-chip main memory. The processor core comprises an instruction pipeline and a hit-judgment logic unit that receives operation instructions from the instruction pipeline. An access bit-width judgment unit and a bit-width/address conversion unit are arranged between the hit-judgment logic unit and the cache bus interface; the hit-judgment logic unit sends its judgment result to the instruction pipeline, and the processor core is connected to the on-chip bus through the cache bus interface. In the conversion method, for a byte or half-word read access that misses in the cache and whose address belongs to a cacheable area, the bit-width/address conversion unit converts the byte or half-word read into a single full-width access completed over the bus, so the original cache-update strategy is unaffected and flexibility is retained.
Owner:NO 771 INST OF NO 9 RES INST CHINA AEROSPACE SCI & TECH
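A common way to realize this kind of bit-width conversion is to align the narrow read down to a word boundary, perform one full-width access, and extract the requested bytes. The sketch below assumes a 32-bit little-endian bus; all names are illustrative, not from the patent.

```python
# Hypothetical sketch of the bit-width/address conversion described above:
# a byte (or half-word) read is widened to an aligned full-word bus
# access, and the requested bytes are extracted from the returned word.

WORD = 4  # assumed 32-bit bus word, little-endian byte order

def widen_read(addr, size, bus_read_word):
    """Convert a narrow read (size in bytes) into one aligned word access."""
    word_addr = addr & ~(WORD - 1)        # align down to a word boundary
    word = bus_read_word(word_addr)       # single full-width bus access
    offset = addr - word_addr             # byte position inside the word
    return (word >> (8 * offset)) & ((1 << (8 * size)) - 1)
```

Because the bus only ever sees aligned full-width reads, the cache's existing line-fill and update policy does not need to change.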

Multi-core DMA (direct memory access) segmented data transmission method with slave counting for a GPDSP (general purpose digital signal processor)

The invention discloses a multi-core DMA (direct memory access) segmented data transmission method with slave counting for a GPDSP (general purpose digital signal processor). The method includes: (1) the host DMA starts and generates a segmented data transmission request according to configuration parameters, and each segmented read request issued by the DMA carries a return-data selection vector indicating the target DSP core of the returned data; (2) the data returned by the out-of-core storage carries the return-data selection vector of the corresponding read request, and the on-chip network interprets this field and delivers the data to each DSP core whose bit in the vector is set; (3) after receiving the returned data, the DMA of each DSP core forwards it to the in-core storage (AM or SM) and counts it at the same time; (4) when the count completes, the service-completion flag register is set. The method has the advantages of simple principle, convenient operation, flexible configuration, and higher memory access efficiency.
Owner:NAT UNIV OF DEFENSE TECH
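Steps (1)-(4) can be modeled with a small sketch, assuming a bit-vector where bit i selects core i. The class and function names are illustrative, not taken from the patent.

```python
# Hypothetical sketch of the slave-counting scheme: each read request
# carries a selection vector whose bits name the target DSP cores; the
# on-chip network routes returned data per set bit, and each slave core
# counts its received items, setting its completion flag when done.

class SlaveDMA:
    def __init__(self, expected):
        self.expected = expected      # items this core should receive
        self.received = 0
        self.buffer = []              # stands in for in-core AM/SM storage
        self.done = False             # service-completion flag register

    def on_return_data(self, data):
        self.buffer.append(data)      # forward to in-core storage
        self.received += 1
        if self.received == self.expected:
            self.done = True          # slave sets its own completion flag

def route_return_data(select_vector, data, slaves):
    """Deliver returned data to every core whose bit in the vector is set."""
    for core_id, slave in enumerate(slaves):
        if (select_vector >> core_id) & 1:
            slave.on_return_data(data)
```

Counting at the slave side means no extra completion traffic flows back to the host: each core learns locally when its share of the transfer has arrived.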

Multi-core DMA (direct memory access) segmented data transmission method with host counting for a GPDSP

Status: Active. Publication number: CN104679691A. Benefits: reduces the number of access requests; effective perception of memory access characteristics. Classifications: electric digital data processing; direct memory access; transfer procedure.
The invention discloses a multi-core DMA (direct memory access) segmented data transmission method with host counting for a GPDSP. The method includes: the host DMA starts and generates a segmented data transmission request according to configuration parameters, and each segmented read request issued by the host DMA carries a return-data selection vector marking the target nodes of the returned data, each bit of which indicates whether the corresponding core is a target node for the read return data; when the data corresponding to a read request returns, the on-chip network distributes it to the corresponding DMAs according to the return-data selection vector; the host DMA counts the transmitted data; when the count completes, the host DMA sends a receive-buffer flush signal to all slave DMAs taking part in the transmission service; after the slave DMAs empty their receive buffers, the data transmission service is complete. The method has the advantages of simple principle, convenient operation, flexible configuration, and high memory access efficiency.
Owner:NAT UNIV OF DEFENSE TECH
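The host-counting variant can be sketched in contrast to the slave-counting one: here the host tallies every returned item and broadcasts a flush when the transfer is complete. The names below are illustrative assumptions.

```python
# Hypothetical sketch of the host-counting variant: the host DMA counts
# all returned items itself and, once the count completes, broadcasts a
# flush signal to every participating slave; a slave finishes when it
# has emptied its receive cache.

class Slave:
    def __init__(self):
        self.rx = []                  # receive cache
        self.finished = False

    def receive(self, data):
        self.rx.append(data)

    def flush(self):
        self.rx.clear()               # empty the receive cache
        self.finished = True

class HostDMA:
    def __init__(self, total, slaves):
        self.remaining = total        # host counts transfers, not slaves
        self.slaves = slaves

    def on_return_data(self, select_vector, data):
        for i, s in enumerate(self.slaves):
            if (select_vector >> i) & 1:
                s.receive(data)
        self.remaining -= 1
        if self.remaining == 0:
            for s in self.slaves:     # broadcast the flush signal
                s.flush()
```

Centralizing the count in the host trades per-slave counters for one counter plus a broadcast flush at the end of the service.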

Video decoder with a dual-buffer memory structure and control method

The invention provides a video decoder with a dual-buffer storage structure. The video decoder comprises a storage-access control register, a main storage interface, an auxiliary storage interface, a bus matrix, a main storage, an auxiliary storage, and a decoder body. According to the format of the input compressed video stream, a main controller configures, through the storage-access control register, the storages used by the hardware functional modules of the decoder body, and sets the corresponding access addresses and spaces. After the decoder body receives a decoding start command from the main controller, its hardware functional modules start and execute concurrently, sending access requests to the main or auxiliary storage interface according to the settings in the storage-access control register. The main and auxiliary storage interfaces receive the access requests of the hardware functional modules and, after arbitration, access the corresponding storage. The method and circuit adopt a centralized storage-management mode, configure different storage access modes for the hardware functional modules of the decoder body according to video streams in multi-standard compressed coding formats, save storage area, and effectively reduce the bandwidth requirements of the system.
Owner:CHINA AGRI UNIV
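The per-format configuration and request routing can be sketched as follows. The module names, format strings, and assignments below are invented for illustration; the patent only specifies that a control register maps each functional module to the main or auxiliary storage.

```python
# Hypothetical sketch of the dual-buffer routing: a storage-access control
# register maps each hardware functional module to the main or auxiliary
# storage, and each access request is dispatched to the matching interface.

MAIN, AUX = "main", "aux"

def configure(stream_format):
    """Main controller assigns a storage per module for a given format."""
    if stream_format == "H264":          # illustrative assignment only
        return {"entropy": MAIN, "mc": AUX, "deblock": MAIN}
    return {"entropy": MAIN, "mc": MAIN, "deblock": AUX}

def dispatch(module, config, main_if, aux_if, request):
    """Send a module's request to the interface its configuration selects."""
    target = main_if if config[module] == MAIN else aux_if
    target.append((module, request))     # interface arbitrates and serves
```

Splitting traffic across two storages this way is what lets concurrent modules avoid contending for one memory port, which is the source of the claimed bandwidth saving.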