
Multi-master device cache control method and system

A cache control and master device technology, applied in memory systems, instruments, and electrical and digital data processing, which addresses problems such as limited access performance, the inability of existing schemes to adapt to complex multi-master scenarios, and the failure of single-cache solutions, with the effect of extending memory life and reducing the number of reads and writes.

Active Publication Date: 2022-01-11
NANJING SEMIDRIVE TECH CO LTD

AI Technical Summary

Problems solved by technology

However, in a scenario where multiple master devices share off-chip memory, alternating accesses from different master devices render a single caching scheme ineffective.
Because the read/write speed of non-volatile memory is difficult to improve quickly in the short term, and the existing approach of adding an on-chip cache cannot adapt to complex multi-master scenarios, access performance in complex applications is limited.

Method used



Examples


Embodiment 1

[0046] Figure 1 is a flow chart of the multi-master device cache control method according to the present invention. The method is described in detail below with reference to Figure 1.

[0047] First, in step 101, the AXI bus accesses from the master devices are grouped.

[0048] In the embodiment of the present invention, in a multi-master scenario, the AXI bus accesses from the master devices are first grouped. The device group (Master Group) to which an access belongs is determined from the master device ID (MID) and the transmission ID (XID), combined with each group's mask (Mask) and preset match value (Match). The specific rule is as follows:

[0049] {MID, XID} & Mask == Match(M)

[0050] When the above condition is met, the access is considered to belong to the Mth group (group M). When multiple device groups meet the above con...
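As a concrete illustration of the matching rule above, here is a minimal sketch in C. The field widths, the number of groups, and the names (MAX_GROUPS, group_cfg, match_group) are assumptions for illustration only; the patent excerpt specifies just the condition ({MID, XID} & Mask) == Match(M).

```c
#include <stdint.h>

#define MAX_GROUPS 8   /* assumed number of device groups */

/* Per-group mask and match value, as in the rule
 * ({MID, XID} & Mask) == Match(M). */
struct group_cfg {
    uint32_t mask;
    uint32_t match;
};

static const struct group_cfg groups[MAX_GROUPS] = {
    /* example configuration; values are illustrative only */
    { .mask = 0xFF00u, .match = 0x0100u },   /* group 0 */
    { .mask = 0xFF00u, .match = 0x0200u },   /* group 1 */
};

/* Concatenate MID and XID into one key, then test each group's
 * mask/match pair. Returns the first matching group index, or -1
 * if no group matches. 8-bit MID and XID widths are assumed. */
static int match_group(uint8_t mid, uint8_t xid)
{
    uint32_t key = ((uint32_t)mid << 8) | xid;   /* {MID, XID} */

    for (int m = 0; m < MAX_GROUPS; m++) {
        if (groups[m].mask == 0)
            continue;                            /* unused slot */
        if ((key & groups[m].mask) == groups[m].match)
            return m;
    }
    return -1;                                   /* no group matched */
}
```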

Embodiment 2

[0074] In an embodiment of the present invention, a multi-master device cache control system is also provided. Figure 3 is a structural diagram of the multi-master device cache control system according to the present invention. As shown in Figure 3, the system includes multiple master devices 10, a non-volatile storage read-write controller 20, a cache control unit 30, a static random access memory 40, and a non-volatile memory 50, of which:

[0075] The plurality of master devices 10 send access requests to the non-volatile storage read-write controller 20 through the AXI bus;

[0076] The non-volatile storage read-write controller 20 groups the access requests of the multiple master devices 10.

[0077] In the embodiment of the present invention, the non-volatile storage read-write controller 20 combines each group's mask (Mask) and preset match value according to the two identifiers of the master device ID (MID)...
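To make the component relationships of this embodiment easier to follow, here is a minimal structural sketch in C. The type names, field names, and sizes (axi_request, cache_ctrl, rw_controller, NUM_MASTERS, SRAM_SIZE) are hypothetical; the patent excerpt only names the components and the AXI path between them.

```c
#include <stdint.h>

/* Hypothetical software model of the system in Figure 3. */

#define NUM_MASTERS 4                  /* master devices 10                */
#define SRAM_SIZE   (64 * 1024)        /* static RAM 40 used as the cache  */

struct axi_request {                   /* access request on the AXI bus    */
    uint8_t  mid;                      /* master device ID (MID)           */
    uint8_t  xid;                      /* transmission ID (XID)            */
    uint32_t addr;
    int      is_read;
};

struct cache_ctrl {                    /* cache control unit 30            */
    uint8_t *sram;                     /* backing SRAM 40                  */
    /* per-group cache-unit bookkeeping would live here */
};

struct rw_controller {                 /* non-volatile storage
                                          read-write controller 20         */
    struct cache_ctrl *cache;
    /* interface to the non-volatile memory 50 */
};
```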

Embodiment 3

[0083] In one embodiment of the present invention, a control chip is also provided, including the multi-master device cache control system of Embodiment 2. When multiple master devices access the off-chip non-volatile memory, the system groups the master devices, allocates the limited on-chip cache resources to the different master device groups both statically and dynamically, and adapts the allocation according to access frequency and characteristics, so as to optimize access efficiency and reduce the number of non-volatile memory reads and writes. A sketch of one possible allocation policy follows.
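The static and dynamic allocation is not detailed in this excerpt, so the following is only a rough sketch under the assumption that cache units are periodically redistributed in proportion to each group's recent access count; the function and variable names are hypothetical, not the patent's exact algorithm.

```c
#include <stdint.h>

#define NUM_GROUPS           4
#define TOTAL_CACHE_UNITS    16
#define MIN_UNITS_PER_GROUP  1         /* assumed static floor per group */

static uint32_t access_count[NUM_GROUPS]; /* incremented on every access */
static uint32_t units_of[NUM_GROUPS];     /* current allocation          */

/* Periodically redistribute cache units: each group keeps a static
 * minimum, and the remaining units are shared out in proportion to
 * recent access frequency (any rounding remainder stays unallocated). */
static void rebalance_cache_units(void)
{
    uint32_t total = 0;
    for (int g = 0; g < NUM_GROUPS; g++)
        total += access_count[g];

    uint32_t dynamic_pool = TOTAL_CACHE_UNITS
                          - NUM_GROUPS * MIN_UNITS_PER_GROUP;

    for (int g = 0; g < NUM_GROUPS; g++) {
        uint32_t share = total ? dynamic_pool * access_count[g] / total : 0;
        units_of[g] = MIN_UNITS_PER_GROUP + share;
        access_count[g] = 0;           /* start a new measurement window */
    }
}
```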



Abstract

The invention discloses a multi-master device cache control method. The method comprises the following steps: 1) grouping accesses from the master devices and determining the master device group to which each access belongs; 2) dividing the cache space into a plurality of cache units and allocating the cache units to the master device groups; 3) receiving a read access from any master device and searching for the required data in the cache space; and 4) returning the data to the master device. The invention further provides a multi-master device cache control system. When multiple master devices access an off-chip non-volatile memory, the read-write frequency of the memory can be reduced, the service life of the memory can be prolonged, and the data access efficiency of the on-chip master devices can be improved.
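The four steps of the abstract can be illustrated with a short read-path sketch in C. Everything below beyond the four-step outline itself (the cache_unit layout, UNIT_SIZE, the per-group unit count, nvm_read, and the naive eviction choice) is an assumption for illustration, not the patent's implementation.

```c
#include <stdint.h>
#include <string.h>

#define UNIT_SIZE        256           /* assumed cache-unit size in bytes  */
#define UNITS_PER_GROUP  4             /* assumed units allocated per group */
#define MAX_GROUPS       8

struct cache_unit {
    uint32_t tag;                      /* NVM address the unit holds        */
    int      valid;
    uint8_t  data[UNIT_SIZE];
};

/* Step 2: cache space divided into units and assigned per group. */
static struct cache_unit cache_of[MAX_GROUPS][UNITS_PER_GROUP];

/* Stub for the NVM driver (assumed); a real system would issue the
 * transfer through the read-write controller. */
static void nvm_read(uint32_t addr, uint8_t *dst, uint32_t len)
{
    (void)addr;
    memset(dst, 0, len);
}

/* Steps 3 and 4: serve a read access from a master in group `grp`.
 * On a hit the data comes from the on-chip cache; on a miss one unit
 * is refilled from non-volatile memory (eviction policy assumed). */
static const uint8_t *handle_read(int grp, uint32_t addr)
{
    uint32_t tag = addr / UNIT_SIZE * UNIT_SIZE;

    for (int i = 0; i < UNITS_PER_GROUP; i++) {            /* search cache */
        struct cache_unit *u = &cache_of[grp][i];
        if (u->valid && u->tag == tag)
            return &u->data[addr - tag];                    /* hit */
    }

    struct cache_unit *victim = &cache_of[grp][0];          /* naive eviction */
    nvm_read(tag, victim->data, UNIT_SIZE);                 /* miss: refill   */
    victim->tag = tag;
    victim->valid = 1;
    return &victim->data[addr - tag];
}
```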

Description

Technical Field

[0001] The present invention relates to the technical field of memory access control, in particular to a multi-master device cache control method and system.

Background

[0002] In board-level systems based on an MCU / MPU, using off-chip non-volatile memory for program or data access is a widely used solution, and the read/write speed of the off-chip memory has a direct impact on system performance. With the increasing integration and complexity of chips, multi-core processors and other master devices are often included in the same chip, and the startup code and software complexity have also grown, placing new requirements on the storage capacity and read/write speed of the non-volatile memory.

[0003] In the prior art, the capacity and read/write rate of off-chip non-volatile memory are constantly increasing, and the increase in read/write rate mainly depends on a higher-frequency clock, double-edge sampling (DDR), and a wider data bus (...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06F12/0893; G06F12/0866
CPC: G06F12/0893; G06F12/0866
Inventor: 巩少辉, 张力航, 刘雄飞, 叶巧玉
Owner: NANJING SEMIDRIVE TECH CO LTD