
Dynamic two-level cache flash translation layer address mapping method based on page-level mapping

A flash translation layer and two-level cache technology, applied in the field of computer science, which solves the problem of I/O performance degradation and achieves the effects of improved response time, better performance, and a reduced number of mapping modifications.

Publication status: Inactive
Publication date: 2019-05-10
Applicant: BEIHANG UNIV


Problems solved by technology

[0004] In order to solve the above-mentioned problems, the present invention proposes a dynamic two-level cache flash translation layer address mapping method based on page-level mapping. It is aimed at the problems that current page-level address mapping fails to make full use of the locality of sequential I/O and that I/O performance degrades when random I/Os are numerous. The method uses a two-level cache and adopts different cache management strategies on the two levels, making full use of the temporal locality and spatial locality of sequential I/O, and combines this with dynamic cache adjustment to handle scenarios with more random I/O. This not only reduces the occupation of SRAM, but also improves the cache hit rate and performance in random I/O mode.
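As a rough illustration of this two-level organization, the following C sketch (not taken from the patent; the structure names, field layouts, and capacities are assumptions) caches single LPN-to-PPN mapping entries in L1Cache and whole mapping pages in L2Cache, and looks an address up in L1 first and then in L2.

```c
/* Minimal sketch of the two-level mapping cache; names and sizes are hypothetical. */
#include <stdbool.h>
#include <stdint.h>

#define MAP_ENTRIES_PER_PAGE 512   /* assumed number of mapping entries per mapping page */
#define L1_MAX_ENTRIES       1024  /* assumed upper bound on the first-level cache size  */
#define L2_MAX_PAGES         8     /* assumed capacity of the second-level cache         */

/* L1Cache: single address mapping items (one LPN -> PPN pair per entry). */
typedef struct {
    uint32_t lpn;    /* logical page number  */
    uint32_t ppn;    /* physical page number */
    bool     valid;
} l1_entry_t;

/* L2Cache: entire mapping pages, each covering a contiguous run of LPNs. */
typedef struct {
    uint32_t first_lpn;                  /* first LPN covered by this mapping page */
    uint32_t ppn[MAP_ENTRIES_PER_PAGE];  /* PPNs for every LPN in the page         */
    bool     valid;
} l2_map_page_t;

typedef struct {
    l1_entry_t    l1[L1_MAX_ENTRIES];
    unsigned      l1_size;               /* current, dynamically adjusted L1 size */
    l2_map_page_t l2[L2_MAX_PAGES];
} two_level_cache_t;

/* Translate an LPN: try the single-entry L1Cache first, then the page-level L2Cache. */
bool cache_lookup(const two_level_cache_t *c, uint32_t lpn, uint32_t *ppn_out)
{
    for (unsigned i = 0; i < c->l1_size; i++) {
        if (c->l1[i].valid && c->l1[i].lpn == lpn) {
            *ppn_out = c->l1[i].ppn;                    /* L1 hit */
            return true;
        }
    }
    for (unsigned i = 0; i < L2_MAX_PAGES; i++) {
        uint32_t base = c->l2[i].first_lpn;
        if (c->l2[i].valid && lpn >= base && lpn < base + MAP_ENTRIES_PER_PAGE) {
            *ppn_out = c->l2[i].ppn[lpn - base];        /* L2 hit */
            return true;
        }
    }
    return false;   /* miss: the mapping entry must be fetched from flash */
}
```

The miss path, replacement policies, and write-back of dirty entries are omitted here; the patent applies a different management strategy on each level, which this sketch does not reproduce.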




Embodiment Construction

[0045] In order to make the purpose, technical solutions and advantages of the present invention clearer, the embodiments of the present invention are described in detail below with reference to the drawings and specific implementation steps; this is not intended to limit the present invention.

[0046] Figure 1 is a schematic diagram of the structure of the dynamic two-level cache flash translation layer address mapping method based on page-level mapping of the present invention.

[0047] The present invention exploits the temporal locality and spatial locality of sequential I/O. A first-level cache L1Cache and a second-level cache L2Cache are set up, and different cache management strategies are used on the two levels: the first-level cache L1Cache caches single address mapping items, while the second-level cache L2Cache caches entire mapping pages. A spatial locality detection method is applied on the second-level cache to inspect a certain number of current I/O requests; if the current requests show relatively high spatial locality, the corresponding mapping page is fetched into the second-level cache L2Cache.
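The spatial locality detection can be imagined along the lines of the following sketch; the window size, threshold, and function name are illustrative assumptions, since the text above only says that a certain number of current I/O requests are examined.

```c
/* Sketch of the spatial locality check on the second-level cache.
 * Window size and threshold are assumptions, not values from the patent. */
#include <stdbool.h>
#include <stdint.h>

#define DETECT_WINDOW  16   /* assumed: number of recent I/O requests examined          */
#define SEQ_THRESHOLD  12   /* assumed: sequential pairs needed to call the run "local" */

/* Returns true if the last DETECT_WINDOW logical page numbers look sequential,
 * i.e. the current requests show high spatial locality. */
bool has_spatial_locality(const uint32_t recent_lpn[DETECT_WINDOW])
{
    unsigned sequential_pairs = 0;
    for (unsigned i = 1; i < DETECT_WINDOW; i++) {
        if (recent_lpn[i] == recent_lpn[i - 1] + 1)
            sequential_pairs++;
    }
    return sequential_pairs >= SEQ_THRESHOLD;
}

/* Caller side, as comments:
 *   if (has_spatial_locality(window))
 *       load the whole mapping page covering the current LPN into L2Cache;
 *   else
 *       serve the single mapping entry through L1Cache as usual.
 */
```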



Abstract

The invention provides a dynamic two-level cache flash translation layer address mapping method based on page-level mapping, which solves the problems that current page-level address mapping cannot fully utilize sequential I/O locality and that I/O performance degrades when a large number of random I/Os exist. The method exploits the temporal locality and spatial locality of sequential I/O: a first-level cache L1Cache and a second-level cache L2Cache are set up, and different cache management strategies are adopted on the two levels. The first-level cache L1Cache is used for caching single address mapping items, and the second-level cache L2Cache is used for caching whole mapping pages. A spatial locality detection method is applied on the second-level cache to examine a certain number of current I/O requests; if the current I/O requests show relatively high spatial locality, the corresponding mapping page is fetched into the second-level cache L2Cache. Meanwhile, the first-level cache L1Cache dynamically adjusts its size according to the cache hit rate, so that I/O performance in random I/O mode is ensured.
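The hit-rate-driven resizing of L1Cache mentioned above could look roughly like the following sketch; the thresholds, step size, bounds, and even the direction of adjustment are illustrative assumptions, since the abstract only states that the first-level cache size follows the cache hit rate.

```c
/* Sketch of adjusting the L1Cache size from its observed hit rate.
 * All thresholds, bounds, and the step size are assumptions for illustration. */
#include <stddef.h>

#define L1_MIN_ENTRIES  128
#define L1_CAP_ENTRIES  4096
#define RESIZE_STEP     128
#define LOW_HIT_RATE    0.20   /* assumed: below this, random I/O dominates        */
#define HIGH_HIT_RATE   0.60   /* assumed: above this, locality is already covered */

size_t adjust_l1_size(size_t current_entries, double hit_rate)
{
    if (hit_rate < LOW_HIT_RATE && current_entries + RESIZE_STEP <= L1_CAP_ENTRIES)
        return current_entries + RESIZE_STEP;  /* grow L1 to catch more random mappings */
    if (hit_rate > HIGH_HIT_RATE && current_entries >= L1_MIN_ENTRIES + RESIZE_STEP)
        return current_entries - RESIZE_STEP;  /* shrink L1 to reduce SRAM occupation */
    return current_entries;                    /* otherwise keep the current size */
}
```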

Description

Technical field

[0001] The invention belongs to the field of computer science and technology, and in particular relates to a dynamic two-level cache flash translation layer address mapping method based on page-level mapping.

Background technique

[0002] NAND Flash is widely used in many fields, such as embedded devices and high-performance servers, because of its low power consumption, non-volatility, and excellent IOPS characteristics. Its I/O performance is much higher than that of traditional mechanical hard disks, and to a certain extent it makes up for the speed gap between the CPU and I/O in computer systems. Its capacity currently doubles every year. Its structure is completely different from that of a traditional mechanical hard disk: NAND Flash supports random addressing, the page is the basic unit of reading and writing, and the block is the basic unit of erasing. Data can only be updated out of place and cannot be rewritten directly in place. The current operating system supports t...
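To make the out-of-place update constraint concrete, here is a minimal page-level mapping sketch (illustrative only; the table size and the free-page allocator are hypothetical, and garbage collection is omitted): a write always goes to a fresh physical page, and only the LPN-to-PPN entry is redirected.

```c
/* Background illustration of a plain page-level mapping table with out-of-place
 * updates; sizes and the free-page allocator are hypothetical, GC is omitted. */
#include <stdint.h>

#define NUM_LOGICAL_PAGES 1024

static uint32_t map_table[NUM_LOGICAL_PAGES];  /* LPN -> PPN */
static uint32_t next_free_ppn;                 /* simplistic bump allocator for free pages */

/* NAND flash cannot rewrite a page in place, so a write is redirected to a
 * fresh physical page and only the mapping entry changes; the stale page is
 * reclaimed later when its block is erased. */
uint32_t ftl_write(uint32_t lpn)
{
    uint32_t new_ppn = next_free_ppn++;  /* allocate the next free flash page */
    map_table[lpn] = new_ppn;            /* out-of-place update of the mapping */
    return new_ppn;
}

uint32_t ftl_read(uint32_t lpn)
{
    return map_table[lpn];               /* translate the LPN to its current PPN */
}
```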


Application Information

IPC(8): G06F12/1009; G06F12/0811
Inventors: 阮利 (Ruan Li), 丁树勋 (Ding Shuxun), 肖利民 (Xiao Limin), 苏书宾 (Su Shubin), 李昂鸿 (Li Anghong), 殷成涛 (Yin Chengtao)
Owner: BEIHANG UNIV