Application defined multi-tiered wear-leveling for storage class memory systems
A wear-leveling technology defined by applications, applied to memory systems, memory architecture access/allocation, data-processing input/output processes, etc., which can solve the problem of excessive wear on memory cells
Pending Publication Date: 2020-10-16
HUAWEI TECH CO LTD
AI-Extracted Technical Summary
Problems solved by technology
However, if a cell is repeatedly written and erased, the cell eventually wears out.
Abstract
A method implemented by a memory device comprises obtaining, by a processor coupled to a memory, a wear-leveling policy from an application executable at the memory device, wherein the wear-leveling policy indicates a memory size by which to perform wear-leveling within an instance, wherein the instance comprises an address range assigned to the application in the memory of the memory device (703); obtaining, by a processor, a request to access the instance (706); and performing, by the processor, wear-leveling on a plurality of memory cells within the instance according to the wear-leveling policy (709).
Application Domain
Memory architecture accessing/allocation; memory systems
Technology Topic
Storage cell; engineering
Examples
Example Embodiment
[0036] It should be understood at the outset that although illustrative implementations of one or more embodiments are provided below, the disclosed systems and/or methods may be implemented using any number of techniques, whether currently known or in existence. The disclosure should in no way be limited to the illustrative implementations, drawings, and techniques described below, including the exemplary designs and implementations illustrated and described herein, but may be modified within the scope of the appended claims along with their full scope of equivalents.
[0037] Wear leveling is a process of moving data so that the data is stored at different physical addresses in memory at different times, to prevent some memory cells from wearing out earlier than others. Typical wear-leveling methods operate on memory in a coarse-grained manner, in which thousands of bits change position during one iteration of wear leveling. Typical wear-leveling methods also do not consider the applications associated with the data stored in, or accessing, the memory cells.
[0038] Embodiments of the present disclosure enable applications to define wear-leveling policies based on application requirements or preferences. For example, an instance may be allocated in memory for an application, where the instance is a range of memory addresses. In an embodiment, the application may define a wear-leveling policy that indicates the memory size by which wear leveling is performed within the instance. In this way, wear leveling can be performed in a finer-grained, application-specific manner, extending the life cycle of the memory system.
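As a concrete illustration of such a policy, the following minimal C sketch shows one way an application might declare the memory sizes at which it allows wear leveling; the names wl_level, wl_policy, and wl_set_policy are hypothetical and are not drawn from the disclosure.

```c
#include <stdint.h>
#include <stdio.h>

typedef enum {            /* memory sizes at which wear leveling may run */
    WL_BIT, WL_NIBBLE, WL_CODEWORD, WL_SECTOR, WL_INSTANCE
} wl_level;

typedef struct {
    uint64_t instance_base;        /* start of the instance's address range */
    uint64_t instance_len;         /* length of the instance in bytes */
    uint32_t level_mask;           /* bitmask of wl_level values the app allows */
    int      allow_cross_instance; /* opt in to cross-instance wear leveling */
} wl_policy;

/* Stand-in for handing the policy to the software wear-leveling module. */
static void wl_set_policy(const wl_policy *p) {
    printf("policy: base=0x%llx len=%llu mask=0x%x x-wl=%d\n",
           (unsigned long long)p->instance_base,
           (unsigned long long)p->instance_len,
           (unsigned)p->level_mask, p->allow_cross_instance);
}

int main(void) {
    /* An application that allows sector- and codeword-level wear leveling
     * within its instance but forbids cross-instance wear leveling. */
    wl_policy p = { 0x100000, 4096,
                    (1u << WL_SECTOR) | (1u << WL_CODEWORD), 0 };
    wl_set_policy(&p);
    return 0;
}
```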
[0039] Figure 1 is a schematic diagram of a memory system 100 of a device or host according to an embodiment of the present disclosure. A single computing device, for example user equipment (UE) or a server located in a central office (CO) of a network service provider, may include the memory system 100. The memory system 100 may include a user space layer 103, a kernel space layer 106, and a hardware memory layer 109. The user space layer 103 may include an application 111, which may be a low-level application executed by a user operating the device. For example, the application 111 may be a streaming media application, a social media application, an electronic communication application, a messaging application, or any other type of application executed by the device. The kernel space layer 106 may be the operating system of the device, which interfaces with the user space layer 103 and the hardware memory layer 109 to provide services to users of the device. The kernel space layer 106 may include a memory mapping unit (MMU) module 113 configured to map virtual memory addresses to physical memory addresses. For example, upon a request from the application 111, the kernel space layer 106 may use a mapping table to convert a virtual address into the corresponding physical address. A virtual address may be an address generated during execution by the central processing unit (CPU) of the device, and is the address used by the application 111 to access the hardware memory layer 109. A physical address of the hardware memory layer 109 refers to the hardware address of a physical memory cell in the hardware memory layer 109. A virtual address may be the same as or different from the corresponding physical address.
[0040] The hardware memory layer 109 may include the memory of the device, which may be accessed by the various applications 111 executable on the device. For example, the hardware memory layer 109 may be storage class memory (SCM), such as 3-dimensional (3D) cross point (XPoint) memory, phase-change random access memory (RAM), or any resistive RAM. The hardware memory layer 109 may include multiple memory cells, each cell storing one bit of data. In some embodiments, the hardware memory layer 109 can be divided into sections of various memory sizes, as described further with reference to Figure 3. Generally, the application 111 is allocated, or requests, a virtual address range of memory locations in the hardware memory layer 109, referred to as the instance 115. The virtual address range may correspond to a physical address range in the hardware memory layer 109, where the physical address range includes one or more memory cells of the hardware memory layer 109. When the application 111 accesses the instance 115 using a read or write function, the application 111 may use a memory library 118 (for example, PMemLib) to interface with the MMU module 113. The MMU module 113 may provide access to the physical addresses of the hardware memory layer 109 within the instance 115 in response to the read or write function requested by the application 111.
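A sketch of this access path follows, under stated assumptions: pmem_map_instance and pmem_write are hypothetical stand-ins for a PMemLib-style interface rather than the actual library API, and the MMU translation is reduced to a direct copy.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define INSTANCE_LEN 4096
static uint8_t scm[INSTANCE_LEN];      /* stand-in for the instance's cells */

/* Hypothetical PMemLib-style wrappers: map the instance and issue a write
 * at a virtual offset within it (the real MMU translation is elided). */
static void *pmem_map_instance(void) { return scm; }
static void  pmem_write(void *base, uint64_t off,
                        const void *src, size_t n) {
    memcpy((uint8_t *)base + off, src, n);
}

int main(void) {
    void *inst = pmem_map_instance();           /* instance 115 */
    const char msg[] = "hello";
    pmem_write(inst, 128, msg, sizeof msg);     /* write via the library */
    printf("%s\n", (char *)((uint8_t *)inst + 128));
    return 0;
}
```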
[0041] In embodiments in which the hardware memory layer 109 is an SCM, the SCM includes multiple cells, each of which may be the smallest physical unit for storing data. SCM is a non-volatile storage technology that uses low-cost materials such as chalcogenides, perovskites, phase-change materials, magnetic bubble technology, and carbon nanotubes. SCM exhibits dynamic random access memory (DRAM)-like performance at a cost lower than DRAM, and its extrapolated cost may over time become equal to or less than that of an enterprise disk drive. SCM's price/performance ratio provides a storage tier between DRAM main system memory and disk storage; this tier can be viewed as a very large disk cache. Because storage class memory is non-volatile, data can be stored in it persistently.
[0042] SCM is usually bit-alterable, like DRAM, which allows users or administrators to change data bit by bit. SCM is therefore suitable as a replacement for, or extension of, disk storage or system memory. However, unlike DRAM and disk drives, SCM technologies provide a limited number of write cycles; flash memory also exhibits this characteristic. Flash provides 10^3 to 10^6 write cycles, while SCM technologies support 10^6 to 10^12 write cycles.
[0043] When some cells are effectively worn while other cells are relatively unworn, the presence of the worn cells generally degrades the overall performance of the memory system 100. In addition to the performance degradation associated with the worn cells themselves, the overall performance of the memory system 100 may be adversely affected when an insufficient number of unworn cells remains available for storing required data. Generally, once a critical number of worn cells exists in the memory system 100, the memory system 100 may be regarded as unusable, even if many other cells are relatively unworn.
[0044] To increase the likelihood that the cells in a memory system wear fairly evenly, a wear-leveling operation is usually performed. Generally, the wear-leveling operation changes which cells store the data associated with a particular address, so that the same data is not always stored in the same cells. By rotating the data stored in each cell, it becomes less likely that a particular cell wears out before the other cells.
[0045] A typical memory system maintains an address mapping table that stores the mapping between the physical address of data and the virtual (or logical) address of that data. Wear leveling is usually performed by periodically changing the physical address of the data without changing its logical address. However, typical wear-leveling methods operate on memory in a coarse-grained manner, in which thousands of bits change position during one iteration of wear leveling. For example, one iteration of wear leveling may involve changing the positions of thousands of bits (for example, 4 kilobytes (4K) or 16 kilobytes (16K) of data) to prevent uneven cell wear. Typical wear-leveling methods also do not consider the applications 111 associated with the data stored in, or accessing, the memory cells.
[0046] According to various embodiments, the memory system 100 is configured to perform wear leveling based on policies specified by different applications 111 and according to different tiers of memory in the hardware memory layer 109. To this end, the memory system 100 further includes a software wear-leveling module 121 and a hardware wear-leveling module 124. In an embodiment, the software wear-leveling module 121 interfaces with the memory library 118 so that the application 111 can specify one or more wear-leveling policies 130. In an embodiment, a wear-leveling policy 130 defines the manner in which wear leveling is performed within the instance 115 allocated to the application 111 in the hardware memory layer 109. As further described below, the wear-leveling policy 130 may direct wear leveling to be performed according to a memory size within the instance 115, for example, at the nibble or bit level, the codeword level, and/or the sector level. The wear-leveling policy 130 may thus be a set of software- and/or hardware-executable instructions defined by the application 111, which the hardware wear-leveling module 124 executes to perform wear leveling for the application 111. In an embodiment, the hardware wear-leveling module 124 may execute in the hardware memory layer 109 to perform wear leveling within the instance 115 according to the wear-leveling policy 130. For example, the wear-leveling policy 130 may be a set of instructions stored in the memory system 100, such that a processor of the memory system receives the stored instructions and executes them in accordance with the methods 400, 500, 600, and 700 described below.
[0047] In some embodiments, the software wear-leveling module 121 receives the wear-leveling policies 130 of different applications 111A to 111C. As shown in Figure 1, the application 111A may configure two wear-leveling policies 130A and 130B for the instance 115 corresponding to the application 111A. The different wear-leveling policies 130A and 130B indicate the different levels at which wear leveling should be performed on the instance 115 corresponding to the application 111A. For example, the application 111A may set the wear-leveling policy 130A to instruct the memory system 100 to perform wear leveling at one level on the instance 115 corresponding to the application 111A, and may set the wear-leveling policy 130B to instruct the memory system 100 to perform sector-level wear leveling on that instance 115. The application 111B may similarly configure a wear-leveling policy 130C for the instance 115 corresponding to the application 111B, and the application 111C may similarly configure wear-leveling policies 130D to 130F for the instance 115 corresponding to the application 111C. It should be understood that the instance 115 corresponding to each application 111A to 111C may be a different virtual address range in the hardware memory layer 109.
[0048] In some embodiments, the software wear-leveling module 121 receives one or more cross-instance wear-leveling policies 135 from the various applications 111. A cross-instance wear-leveling policy 135 may be a policy indicating whether the application 111 allows its instance 115 to be wear-leveled together with other instances 115 assigned to other applications 111, as further described below. As shown in Figure 1, the cross-instance wear-leveling policy 135A (shown as the X-WL policy in Figure 1) indicates that the application 111A has set a cross-instance wear-leveling policy, and the cross-instance wear-leveling policy 135C indicates that the application 111C has set a cross-instance wear-leveling policy. In one embodiment, cross-instance wear leveling is performed by swapping, switching, or changing the memory locations of the instance 115 allocated to the application 111A and the instance 115 allocated to the application 111C. Depending on whether the applications have set cross-instance wear-leveling policies that allow it, cross-instance wear leveling can be performed among multiple instances 115 belonging to different applications 111. According to some embodiments, the application 111 may thus customize how wear leveling is performed on memory cells within its own instance 115 and across other instances 115.
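A minimal sketch of such an exchange follows, assuming each instance is tracked as a (virtual base, physical base) pair; this representation, and the elision of the actual data copy, are assumptions of the sketch rather than details from the disclosure.

```c
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint64_t virt_base;  /* virtual range the application keeps using */
    uint64_t phys_base;  /* physical region currently backing it */
} instance_map;

/* Swap the physical backing of two instances (data copy elided). Only
 * applications whose X-WL policy allows it should ever reach here. */
static void cross_instance_wl(instance_map *a, instance_map *b) {
    uint64_t tmp = a->phys_base;
    a->phys_base = b->phys_base;
    b->phys_base = tmp;
}

int main(void) {
    instance_map app_a = { 0x10000, 0x800000 };  /* instance of app 111A */
    instance_map app_c = { 0x20000, 0x900000 };  /* instance of app 111C */
    cross_instance_wl(&app_a, &app_c);
    printf("111A now backed by 0x%llx, 111C by 0x%llx\n",
           (unsigned long long)app_a.phys_base,
           (unsigned long long)app_c.phys_base);
    return 0;
}
```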
[0049] Using the application-defined wear-leveling policies 130 disclosed herein is more advantageous than previous wear-leveling methods, because an application-defined wear-leveling policy 130 enables the application 111 to customize how wear leveling is performed on the memory segment allocated to it. Such application-based customization is beneficial because each application may access different types of data, and these data should be wear-leveled differently. For example, suppose a single device executes multiple applications, where the first application frequently stores, updates, and accesses user data, while the second application has higher durability requirements and the data associated with the second application must not be corrupted. Previous wear-leveling methods did not consider application characteristics or requirements when performing wear leveling on the device's memory. Embodiments of the present disclosure enable applications to customize the manner of wear leveling for the data associated with different applications. For example, through the embodiments disclosed herein, the first application may implement a multi-level wear-leveling policy so that the memory cells associated with that application wear uniformly. The second application may disallow wear leveling of its data, or may disallow cross-instance wear leveling, to prevent data corruption. In addition, embodiments of the present disclosure can perform wear leveling on the memory at a finer granularity, such that smaller memory segments are wear-leveled with one another based on the wear-leveling policies 130. Embodiments of the present disclosure therefore give a memory system such as the memory system 100 a longer lifespan than a memory system using traditional wear-leveling methods.
[0050] Figure 2 is a diagram of an embodiment of a storage device 200 (for example, a device including the memory system 100). A single computing device (e.g., a UE or a server located in a CO of a network service provider) may include the storage device 200. The application-defined wear-leveling embodiments disclosed herein are thus suitable both for small-scale computing devices operated by end users and for large-scale cloud computing devices operated by service providers. The storage device 200 may be used to implement and/or support the application-defined wear-leveling mechanisms described herein. The storage device 200 may be implemented in a single node, or its functions may be implemented across multiple nodes. Those skilled in the art will recognize that the term "storage device" encompasses a broad range of devices, of which the storage device 200 is merely an example. The storage device 200 is included for clarity of discussion and in no way limits the application of the present disclosure to a particular storage device embodiment or class of storage device embodiments. At least some of the features and/or methods described in this disclosure may be implemented in a network device or module such as the storage device 200. For example, the features/methods of the present disclosure may be implemented using hardware, firmware, and/or software installed to run on hardware. As shown in Figure 2, the storage device 200 includes: one or more ingress ports 210 and a receiver unit (Rx) 220 for receiving data; at least one processor, logic unit, or central processing unit (CPU) 205 for processing the data; a transmitter unit (Tx) 225 and one or more egress ports 230 for sending the data; and a memory 250 for storing the data.
[0051] The processor 205 may include one or more multi-core processors and be coupled to the memory 250, which may serve as a data store, buffer, and the like. The processor 205 may be implemented as a general-purpose processor, or may be part of one or more application-specific integrated circuits (ASICs) and/or digital signal processors (DSPs). The processor 205 may include a wear-leveling policy module 255, which can perform the processing functions of the software wear-leveling module 121 and implement the methods 400, 500, 600, and 700 discussed more fully below, and/or any other method discussed herein. The inclusion of the wear-leveling policy module 255 and the associated methods and systems therefore improves the functionality of the storage device 200. Further, the wear-leveling policy module 255 transforms a particular article (for example, the network) into a different state. In an alternative embodiment, the wear-leveling policy module 255 may be implemented as instructions stored in the memory 250 and executed by the processor 205. In some embodiments, the processor 205 executes the wear-leveling policy module 255 to implement the methods 400, 500, 600, and 700, such that when a wear-leveling policy 130 is implemented on the instance 115, the processor 205 receives the instruction set corresponding to the wear-leveling policy 130 from the memory 250 and executes it. In this sense, implementing a wear-leveling policy 130 refers to receiving and executing the instruction set corresponding to that policy.
[0052] The memory 250 may be similar to the hardware memory layer 109 and implemented as an SCM, as described above. The memory 250 may include additional memory, including a cache for temporarily storing content, such as random access memory (RAM). In addition, the memory 250 or the additional memory may include long-term storage for storing content for longer periods, such as read-only memory (ROM). For example, the cache and long-term storage may include dynamic RAM (DRAM), solid-state drives (SSDs), hard disks, or combinations thereof. In one embodiment, the memory 250 may store a nibble/bit-level write count 260, a codeword-level write count 270, a sector-level write count 280, an instance-level write count 290, the wear-leveling policies 130, a mapping table 295, a wear-leveling threshold 297, and a write-count threshold 298.
[0053] A write count refers to the number of times one or more memory cells are accessed (for example, written, read, etc.). As described below with reference to Figure 3, the hardware memory layer 109 can be divided into various memory sizes, such as bits, nibbles, codewords, sectors, and/or the instance 115. Accordingly, the nibble/bit-level write count 260 may be the write count of a specific bit or nibble within the instance 115. The codeword-level write count 270 may be the write count of a codeword within the instance 115. The sector-level write count 280 may be the write count of a sector within the instance 115. The instance-level write count 290 may be the write count of the entire instance 115. The wear-leveling threshold 297 is the minimum value a write count should reach before wear leveling is performed.
[0054] The write-count threshold 298 is the minimum value a write count should reach before a carry is performed on the next-higher-level write count. In one embodiment, each memory size within the instance 115 may have a write count, and each memory size may be associated with a different write-count threshold 298. For example, each bit may have one write count, each nibble another, each codeword another, and each sector another. The bit may be the lowest level of memory size and therefore has the lowest-level write count and write-count threshold 298. The nibble may be the next level of memory size and therefore has a higher-level write count and write-count threshold 298 than the bit level. The codeword may be the next level, with a higher-level write count and write-count threshold 298 than the nibble level. The sector may be the next level, with a higher-level write count and write-count threshold 298 than the codeword level.
[0055] In some embodiments, the write counts and write-count thresholds of the different levels are related to one another. When the write count of a lower-level memory size meets the write-count threshold 298 for that memory size, the write count of the next-higher-level memory size is incremented. As an illustrative example, assume that the write-count threshold 298 for the nibble-level write count is 100. When a nibble-level write count reaches 100, the write count of the next-higher memory size (i.e., the codeword-level write count) is incremented by one. A carry to a higher-level write count occurs whenever the higher-level write count is incremented because a lower-level write count has reached its write-count threshold 298.
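A minimal C sketch of this carry mechanism follows. It collapses the per-nibble and per-codeword counters into a single counter per level for brevity, and the threshold values are illustrative; only the nibble example of 100 comes from the text above.

```c
#include <stdint.h>
#include <stdio.h>

enum { NIBBLE = 0, CODEWORD, SECTOR, INSTANCE, LEVELS };

static uint64_t write_count[LEVELS];
/* write-count threshold 298 per level; 100 matches the nibble example */
static const uint64_t carry_threshold[LEVELS] = { 100, 100, 100, 0 };

/* Record one write at the nibble level and propagate carries upward. */
static void record_write(void) {
    for (int lvl = NIBBLE; lvl < INSTANCE; lvl++) {
        if (++write_count[lvl] < carry_threshold[lvl])
            return;                 /* no carry at this level */
        write_count[lvl] = 0;       /* reset and carry to the next level */
    }
    write_count[INSTANCE]++;        /* top level just accumulates */
}

int main(void) {
    for (int i = 0; i < 100 * 100; i++) record_write();
    /* 10,000 nibble writes -> 100 codeword carries -> 1 sector carry */
    printf("codeword=%llu sector=%llu\n",
           (unsigned long long)write_count[CODEWORD],
           (unsigned long long)write_count[SECTOR]);
    return 0;
}
```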
[0056] The wear-leveling policies 130 include the policy specified by each application 111 regarding the memory size by which wear leveling is performed within the instance 115 corresponding to that application 111. In some embodiments, the wear-leveling policies 130 may include a library of executable code or instructions corresponding to the one or more wear-leveling policies 130 set by the different applications 111. When a wear-leveling policy 130 is implemented, the processor 205 may obtain (or fetch) the executable code or instructions corresponding to the wear-leveling policy 130 from the memory 250 and then execute that code or those instructions. The mapping table 295 includes a mapping of the virtual (or logical) addresses of data to the corresponding physical addresses in the memory 250.
[0057] It should be understood that by programming and/or loading executable instructions onto the storage device 200, at least one of the processor 205 and/or the memory 250 is changed, transforming the storage device 200 in part into a particular machine or apparatus, for example a multi-core forwarding architecture, having the novel functionality taught by the present disclosure. It is fundamental to electrical engineering and software engineering that functionality that can be implemented by loading executable software into a computer can be converted into a hardware implementation through well-known design rules. The decision between implementing a concept in software or hardware usually hinges on the stability of the design and the number of units to be produced, rather than on any issue involved in translating from the software domain to the hardware domain. Generally, a design that is still subject to frequent change is preferably implemented in software, because re-spinning a hardware implementation is more expensive than re-spinning a software design. Generally, a stable design that will be produced in volume is more suitable for a hardware implementation (for example, an ASIC), because for large production runs the hardware implementation is cheaper than the software implementation. A design may often be developed and tested in software and later transformed, by well-known design rules, into an equivalent hardware implementation in an ASIC that hardwires the software's instructions. In the same manner that a machine controlled by a new ASIC is a particular machine or apparatus, a computer that has been programmed and/or loaded with executable instructions may likewise be viewed as a particular machine or apparatus.
[0058] Figure 3 is a schematic diagram of the different memory sizes 300 at which wear leveling may be performed, provided by various embodiments of the present disclosure. As discussed above, wear leveling is a process of moving data so that the data is stored at different physical addresses in memory at different times, to prevent some memory cells from wearing out before others. The mapping table 295 keeps track of the data as it moves to a different location during each iteration of wear leveling. For example, data may generally keep the same virtual address, which is the address the application 111 uses to access the data, even though the actual physical address of the data may change after wear leveling is performed.
[0059] The hardware memory layer 109 may be an SCM, which may be divided into memories of different sizes, for example the memory sizes shown in Figure 3. The first memory size shown in Figure 3 is the bit level 301, which comprises one bit of data that can be stored in a memory cell of the hardware memory layer 109. When the wear-leveling policy 130 instructs the memory system 100 to perform wear leveling at the bit level 301, the physical addresses (i.e., locations) of one or more bits of data are changed within the instance 115. When a physical address changes, the virtual address of the relocated bit of data can remain unchanged. The mapping table 295 can be updated to reflect the changed physical addresses of these bits of data.
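The following sketch shows one such relocation step, with the mapping table 295 reduced to a plain array indexed by virtual address; that representation, and the elided cell-to-cell data copy, are assumptions made for illustration.

```c
#include <stdint.h>
#include <stdio.h>

#define ENTRIES 8
static uint64_t mapping_table[ENTRIES];     /* virt index -> phys address */

/* Move the data at virtual index v to a new physical cell. The data copy
 * between cells is elided; only the remapping is shown. */
static void relocate(unsigned v, uint64_t new_phys) {
    mapping_table[v] = new_phys;  /* virtual address stays unchanged */
}

int main(void) {
    mapping_table[3] = 0x7000;    /* bit at virt 3 lives at phys 0x7000 */
    relocate(3, 0x7400);          /* one wear-leveling iteration */
    printf("virt 3 -> phys 0x%llx\n",
           (unsigned long long)mapping_table[3]);
    return 0;
}
```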
[0060] The second memory size shown in Figure 3 is the nibble level 303, which comprises a nibble. A nibble is a set of four bits of data, which may be stored in four cells configured to store one bit per cell, in two cells configured to store two bits per cell, or in one cell configured to store four bits per cell. When the wear-leveling policy 130 instructs the memory system 100 to perform wear leveling at the nibble level 303, the physical addresses (i.e., locations) of one or more nibbles of data are changed within the instance 115. When a physical address changes, the virtual address of the relocated nibble of data can remain unchanged. The mapping table 295 can be updated to reflect the changed physical addresses of these nibbles of data.
[0061] The third memory size shown in Figure 3 is the codeword level 306, which comprises a codeword. A codeword may be any number of bits, as long as it includes user bits and a single independent set of error-correcting code (ECC) bits. One codeword may be approximately 32 bytes (B) to 256 B. The user bits are data bits, and the ECC bits are used to perform error detection and correction on the user bits. The codeword may be represented by a single virtual address and/or physical address, each bit in the codeword may be represented by a virtual address, and each memory cell storing the bits of the codeword may be represented by a physical address. When the wear-leveling policy 130 instructs the memory system 100 to perform wear leveling at the codeword level 306, the physical addresses (i.e., locations) of one or more codewords of user bits and ECC bits are changed within the instance 115. When a physical address changes, the virtual address of the relocated codeword data can remain unchanged. The mapping table 295 can be updated to reflect the changed physical addresses of these codeword data.
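A sketch of one possible codeword layout follows; the 32-byte user payload, the 4-byte ECC field, and the XOR parity used as a stand-in for a real error-correcting code are all illustrative assumptions, with only the user-bits-plus-ECC structure coming from the text above.

```c
#include <stdint.h>
#include <stdio.h>

#define CW_USER_BYTES 32   /* within the 32 B to 256 B range given above */
#define CW_ECC_BYTES   4   /* ECC field size chosen only for illustration */

typedef struct {
    uint8_t user[CW_USER_BYTES]; /* user (data) bits */
    uint8_t ecc[CW_ECC_BYTES];   /* independent ECC bit set over user[] */
} codeword;

/* Fill ecc[] with a trivial XOR parity. Real SCM controllers use stronger
 * codes; this stand-in only makes the user-bits/ECC split concrete. */
static void cw_encode(codeword *cw) {
    uint8_t p = 0;
    for (int i = 0; i < CW_USER_BYTES; i++) p ^= cw->user[i];
    for (int i = 0; i < CW_ECC_BYTES; i++)  cw->ecc[i] = p;
}

int main(void) {
    codeword cw = { .user = { 1, 2, 4 } };  /* remaining bytes are zero */
    cw_encode(&cw);
    printf("parity byte = 0x%02x\n", cw.ecc[0]);  /* 1^2^4 = 0x07 */
    return 0;
}
```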
[0062] The fourth memory size shown in Figure 3 is the sector level 309, which comprises a sector. A sector may be any number of bits and may be preset by an administrator of the memory system 100. Sectors may be managed by the memory system 100 using a translation table, such as a flash translation table or a page mapping table. A sector may comprise kilobytes (KB) to megabytes (MB) of data. The sector may be represented by a single virtual address and/or physical address, each bit in the sector may be represented by a virtual address, and each memory cell storing the bits of the sector may be represented by a physical address. When the wear-leveling policy 130 instructs the memory system 100 to perform wear leveling at the sector level 309, the physical addresses (i.e., locations) of one or more sectors are changed within the instance 115. When a physical address changes, the virtual address of the relocated sector data can remain unchanged. The mapping table 295 can be updated to reflect the changed physical addresses of the sector data.
[0063] The fifth memory size shown in Figure 3 is the instance level 311, which corresponds to the instance 115. As described above, the instance 115 is a number of bits corresponding to the virtual address range and physical address range allocated to a specific application 111. The application 111 corresponding to the instance 115 generally accesses the memory cells within the instance 115 as needed. When the cross-instance wear-leveling policy 135 indicates that the application 111 allows cross-instance wear leveling, the entire instance 115 changes or exchanges location with another instance 115 corresponding to another application 111. In effect, the data corresponding to the first application 111 and the data corresponding to the second application 111 exchange locations.
[0064] The memory sizes 300 shown in Figure 3 (including the bit level 301, nibble level 303, codeword level 306, sector level 309, and instance level 311) are examples of the memory sizes into which the instance 115 may be divided. It should be understood that the instance 115 may be divided into any other memory size specified by an administrator of the memory system 100.
[0065] Figure 4 shows a method 400 for multi-tier wear leveling defined by the application 111, provided by an embodiment of the present disclosure. For example, the method 400 may be implemented by the software wear-leveling module 121, the hardware wear-leveling module 124, and/or the wear-leveling policy module 255. In one embodiment, the method 400 may be implemented when the instance 115 is created for the application 111 and the wear-leveling policies 130 may be instantiated for the application 111.
[0066] In step 403, the instance 115 may be created for the application 111. For example, the processor 205 may create the instance 115 for the application 111. For example, the application 111 may request a memory size, a number of memory cells, or a virtual address range in order to obtain the instance 115 corresponding to the application 111. The instance 115 may include any number of memory cells, and may include one or more bits, nibbles, codewords, and/or sectors. Each application 111 may be assigned a corresponding instance 115, such that each application 111 has a different instance 115, and only that application 111 may be allowed to access the memory cells in the assigned instance 115. The instance 115 may correspond to a physical address range, which may change after each cross-instance wear-leveling iteration.
[0067] In step 406, it is determined whether the application 111 allows cross-instance wear leveling. For example, the wear-leveling policy module 255 may determine whether the application 111 allows cross-instance wear leveling. For example, the software wear-leveling module 121 may determine whether the cross-instance wear-leveling policy 135 indicates that the application 111 allows cross-instance wear leveling. In an embodiment, the application 111 may set a cross-instance wear-leveling policy that allows or prohibits cross-instance wear leveling based on the quality of service (QoS) or durability requirements of the application 111.
[0068] In step 409, when the application 111 allows cross-instance wear leveling, the cross-instance wear-leveling policy 135 is updated to include the application 111. For example, the wear-leveling policy module 255 updates the cross-instance wear-leveling policy 135 to include the application 111 when the application 111 allows cross-instance wear leveling.
[0069] When the application 111 does not allow cross-instance wear leveling, the method 400 proceeds to step 411. When the application 111 allows cross-instance wear leveling, the method 400 also proceeds to step 411 after the cross-instance wear-leveling policy 135 has been updated to include the application 111. In this way, after determining whether to set the highest-level wear-leveling policy, the method 400 continues by determining whether to set lower-level wear-leveling policies.
[0070] In step 411, it is determined whether the application 111 allows sector-level wear leveling. For example, the wear-leveling policy module 255 determines whether the application 111 allows or requests sector-level wear leveling. Sector-level wear leveling refers to performing wear leveling by changing the locations of one or more sector levels 309 within the instance 115. This can prevent one sector level 309 in the instance 115 from wearing out before another sector level 309 in the instance 115.
[0071] In step 413, when the application 111 allows or requests sector-level wear leveling, a sector-level wear-leveling policy 130 is set for the application 111. For example, the wear-leveling policy module 255 sets a sector-level wear-leveling policy 130 for the application 111 when the application 111 allows or requests sector-level wear leveling.
[0072] When the application 111 does not allow sector-level wear leveling, the method 400 proceeds to step 416. When the application 111 allows or requests sector-level wear leveling, the method 400 also proceeds to step 416 after the sector-level wear-leveling policy 130 has been set for the application 111. In this way, the method 400 continues to determine whether to set another wear-leveling policy 130 for the application 111 according to a different memory size. In step 416, it is determined whether the application 111 allows codeword-level wear leveling. For example, the wear-leveling policy module 255 determines whether the application 111 allows or requests codeword-level wear leveling. Codeword-level wear leveling refers to performing wear leveling by changing the locations of one or more codeword levels 306 within the instance 115. This can prevent one codeword level 306 within the instance 115 from wearing out before another codeword level 306.
[0073] In step 419, when the application 111 allows or requests codeword-level wear leveling, a codeword-level wear-leveling policy 130 is set for the application 111. For example, the wear-leveling policy module 255 sets a codeword-level wear-leveling policy 130 for the application 111 when the application 111 allows or requests codeword-level wear leveling.
[0074] When the application 111 does not allow codeword-level wear leveling, the method 400 proceeds to step 421. When the application 111 allows or requests codeword-level wear leveling, the method 400 also proceeds to step 421 after the codeword-level wear-leveling policy 130 has been set for the application 111. In this way, the method 400 continues to determine whether to set another wear-leveling policy 130 for the application 111 according to a different memory size. In step 421, it is determined whether the application 111 allows nibble- and/or bit-level wear leveling. For example, the wear-leveling policy module 255 determines whether the application 111 allows or requests nibble- and/or bit-level wear leveling. Nibble-level wear leveling refers to performing wear leveling by changing the locations of one or more nibble levels 303 within the instance 115. Bit-level wear leveling refers to performing wear leveling by changing the locations of one or more bit levels 301 within the instance 115. This can prevent one nibble level 303 or bit level 301 in the instance 115 from wearing out before another nibble level 303 or bit level 301.
[0075] In step 423, when the application 111 allows or requests nibble- and/or bit-level wear leveling, a nibble- and/or bit-level wear-leveling policy 130 is set for the application 111. For example, the wear-leveling policy module 255 sets a nibble- and/or bit-level wear-leveling policy 130 for the application 111 when the application 111 allows or requests nibble- and/or bit-level wear leveling.
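The following sketch condenses the flow of the method 400 into a loop from the highest level (cross-instance) down to the nibble/bit level; the app_allows callback standing in for the application's stated preferences is an assumption of the sketch.

```c
#include <stdbool.h>
#include <stdio.h>

enum { X_INSTANCE, SECTOR, CODEWORD, NIBBLE_BIT, N_LEVELS };
static const char *level_name[N_LEVELS] =
    { "cross-instance", "sector", "codeword", "nibble/bit" };

static bool policy_set[N_LEVELS];   /* wear-leveling policies 130/135 */

/* Steps 406..423: ask the application about each level in turn. */
static void method_400(bool (*app_allows)(int level)) {
    for (int lvl = X_INSTANCE; lvl < N_LEVELS; lvl++)
        if (app_allows(lvl)) {
            policy_set[lvl] = true;           /* steps 409/413/419/423 */
            printf("policy set: %s\n", level_name[lvl]);
        }
}

/* Example application 111: allows sector and codeword wear leveling only. */
static bool example_app(int lvl) { return lvl == SECTOR || lvl == CODEWORD; }

int main(void) {
    method_400(example_app);
    return 0;
}
```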
[0076] Figures 5A and 5B show a method 500 for multi-tier wear leveling defined by the application 111, provided by an embodiment of the present disclosure. For example, the method 500 may be implemented by the software wear-leveling module 121, the hardware wear-leveling module 124, and/or the wear-leveling policy module 255. In one embodiment, after the wear-leveling policies 130 have been established for the application 111, the method 500 may be implemented when the application 111 requests access to the instance 115 corresponding to the application 111.
[0077] In step 503, a request to access the instance 115 corresponding to the application 111 is received from the application 111. For example, the processor 205 may receive the request to access the instance 115. For example, the request may be a write request including the virtual address of a nibble stored in the instance 115. The request to access the instance 115 may also be a read, erase, replace, or any other function that can be performed using the virtual address of a nibble. In step 506, the nibble/bit-level write count 260 for the nibble included in the request may be incremented. For example, the processor 205 increments the nibble/bit-level write count 260 of the accessed nibble within the instance 115.
[0078] In step 509, it is determined whether the application 111 has set a nibble-level wear-leveling policy 130. For example, the wear-leveling policy module 255 determines whether the application 111 has set the nibble-level wear-leveling policy 130. In step 511, when the application 111 has set the nibble-level wear-leveling policy 130, it is determined whether the nibble/bit-level write count 260 of the accessed nibble meets the wear-leveling threshold 297. For example, the wear-leveling policy module 255 determines whether the nibble/bit-level write count 260 of the accessed nibble meets the wear-leveling threshold 297. The wear-leveling threshold 297 is a threshold write count that should be met before wear leveling is performed. Because wear leveling usually carries resource and time costs, it should not be performed too frequently; the wear-leveling threshold 297 helps perform wear leveling efficiently while conserving resources. When the application 111 has not set a nibble-level wear-leveling policy 130, the method 500 proceeds to step 518 to determine whether the nibble/bit-level write count 260 of the accessed nibble meets the write-count threshold 298. Even when nibble-level wear leveling is not performed, the method 500 can still determine whether the write count meets the corresponding write-count threshold 298 for the higher-level wear leveling.
[0079] In step 513, when the nibble/bit-level write count 260 of the accessed nibble meets the wear-leveling threshold 297, nibble-level wear leveling is performed on one or more nibbles in the instance 115. For example, in response to determining that the nibble/bit-level write count 260 of the accessed nibble meets the wear-leveling threshold 297, the wear-leveling policy module 255 performs nibble-level wear leveling on one or more nibbles in the instance 115. Nibble-level wear leveling may be performed by changing the locations of one or more nibble levels 303 within the instance 115 corresponding to the application 111. In one embodiment, the mapping table 295 may be updated to reflect the changed locations of the nibbles within the instance 115.
[0080] When the nibble/bit-level write count 260 of the accessed nibble does not meet the wear-leveling threshold 297, the method 500 proceeds to step 518. The method 500 also proceeds to step 518 after performing nibble-level wear leveling. In step 518, it is determined whether the nibble/bit-level write count 260 of the accessed nibble meets the write-count threshold 298. For example, the wear-leveling policy module 255 determines whether the nibble/bit-level write count 260 of the accessed nibble meets the write-count threshold 298. The write-count threshold 298 is the minimum value a write count should reach before it is determined whether to perform a carry on the next-higher-level write count. If the nibble/bit-level write count 260 of the accessed nibble does not meet the write-count threshold 298, the method 500 proceeds to step 523 to determine whether the application 111 has set a codeword-level wear-leveling policy 130.
[0081] In step 521, when the nibble/bit-level write count 260 of the accessed nibble meets the write-count threshold 298, the codeword-level write count 270 is incremented. For example, the wear-leveling policy module 255 increments the codeword-level write count 270. In step 523, it is determined whether the application 111 has set a codeword-level wear-leveling policy 130. For example, the wear-leveling policy module 255 determines whether the application 111 has set a codeword-level wear-leveling policy 130. In step 525, when the application 111 has set the codeword-level wear-leveling policy 130, it is determined whether the codeword-level write count 270 of the accessed codeword meets the wear-leveling threshold 297. For example, the wear-leveling policy module 255 determines whether the codeword-level write count 270 of the accessed codeword meets the wear-leveling threshold 297. When the codeword-level write count 270 of the accessed codeword meets the wear-leveling threshold 297, the method 500 proceeds through block A to the steps shown in Figure 5B. When the codeword-level write count 270 of the accessed codeword does not meet the wear-leveling threshold 297, the method 500 proceeds through block B to step 531 shown in Figure 5B. When the application 111 has not set a codeword-level wear-leveling policy 130, the method 500 also proceeds to step 531 of Figure 5B, in which it is determined whether the codeword-level write count 270 of the accessed codeword meets the write-count threshold 298. This is because the codeword-level write count 270 can still be checked before determining whether to perform wear leveling at higher levels.
[0082] Figure 5B is a continuation of the method 500, beginning after the determination of whether the codeword-level write count 270 of the accessed codeword meets the wear-leveling threshold 297. In step 528, when the codeword-level write count 270 of the accessed codeword meets the wear-leveling threshold 297, codeword-level wear leveling is performed on one or more codewords in the instance 115. For example, in response to determining that the codeword-level write count 270 of the accessed codeword meets the wear-leveling threshold 297, the wear-leveling policy module 255 performs codeword-level wear leveling on one or more codewords in the instance 115. Codeword-level wear leveling may be performed by changing the locations of one or more codeword levels 306 within the instance 115 corresponding to the application 111. In one embodiment, the mapping table 295 may be updated to reflect the changed locations of the codewords within the instance 115.
[0083] When the codeword-level write count 270 of the accessed codeword does not meet the wear-leveling threshold 297, the method 500 proceeds to step 531. The method 500 also proceeds to step 531 after performing codeword-level wear leveling. In step 531, it is determined whether the codeword-level write count 270 of the accessed codeword meets the write-count threshold 298. For example, the wear-leveling policy module 255 determines whether the codeword-level write count 270 of the accessed codeword meets the write-count threshold 298. If the codeword-level write count 270 of the accessed codeword does not meet the write-count threshold 298, the method 500 proceeds to step 536 to determine whether the application 111 has set a sector-level wear-leveling policy 130.
[0084] In step 533, when the codeword-level write count 270 of the accessed codeword meets the write-count threshold 298, the sector-level write count 280 is incremented. For example, the wear-leveling policy module 255 increments the sector-level write count 280. In step 536, it is determined whether the application 111 has set a sector-level wear-leveling policy 130. For example, the wear-leveling policy module 255 determines whether the application 111 has set a sector-level wear-leveling policy 130. In step 539, when the application 111 has set the sector-level wear-leveling policy 130, it is determined whether the sector-level write count 280 of the accessed sector meets the wear-leveling threshold 297. For example, the wear-leveling policy module 255 determines whether the sector-level write count 280 of the accessed sector meets the wear-leveling threshold 297. When the application 111 has not set a sector-level wear-leveling policy 130, the method 500 proceeds to step 544 to determine whether the sector-level write count 280 of the accessed sector meets the write-count threshold 298.
[0085] In step 541, when the sector-level write count 280 of the accessed sector meets the wear-leveling threshold 297, sector-level wear leveling is performed on one or more sectors in the instance 115. For example, in response to determining that the sector-level write count 280 of the accessed sector meets the wear-leveling threshold 297, the wear-leveling policy module 255 performs sector-level wear leveling on one or more sectors in the instance 115. Sector-level wear leveling may be performed by changing the locations of one or more sector levels 309 within the instance 115 corresponding to the application 111. In one embodiment, the mapping table 295 may be updated to reflect the changed locations of the sectors within the instance 115.
[0086] When the sector-level write count 280 of the accessed sector does not meet the wear-leveling threshold 297, the method 500 proceeds to step 544. The method 500 also proceeds to step 544 after performing sector-level wear leveling. In step 544, it is determined whether the sector-level write count 280 of the accessed sector meets the write-count threshold 298. For example, the wear-leveling policy module 255 determines whether the sector-level write count 280 of the accessed sector meets the write-count threshold 298. In step 548, when the sector-level write count 280 of the accessed sector meets the write-count threshold 298, the instance-level write count 290 is incremented. For example, the wear-leveling policy module 255 increments the instance-level write count 290.
[0087] In some embodiments, the method 500 may loop back and begin again at step 509 at any time while the instance 115 is being accessed. Although the method 500 describes wear leveling only at the nibble level 303, the codeword level 306, and the sector level 309, it should be understood that the application 111 may define wear leveling to be performed at any memory size within the instance 115. For example, when the application 111 has set a bit-level wear-leveling policy 130, the method 500 may also be used to perform wear leveling at the bit level 301. In this way, the hardware wear-leveling module 124 executes in the hardware memory layer 109 to perform wear leveling at any memory size in a manner similar to that shown in the method 500.
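The per-access flow of the method 500 can be sketched as follows; the sketch collapses the per-nibble, per-codeword, and per-sector counters into one counter per level, and the policy settings and threshold values are illustrative assumptions.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

enum { NIBBLE, CODEWORD, SECTOR, INSTANCE, N_LVL };
static const char *name[N_LVL] = { "nibble", "codeword", "sector", "inst" };

static uint64_t count[N_LVL];                     /* counts 260/270/280/290 */
static bool     policy[N_LVL] = { true, true, false, false };
static const uint64_t wl_threshold    = 64;       /* threshold 297 */
static const uint64_t carry_threshold = 100;      /* threshold 298 */

static void wear_level(int lvl) { printf("%s-level WL\n", name[lvl]); }

/* Steps 506..548, collapsed to one counter per level for brevity. */
static void on_access(void) {
    count[NIBBLE]++;                              /* step 506 */
    for (int lvl = NIBBLE; lvl < INSTANCE; lvl++) {
        if (policy[lvl] && count[lvl] == wl_threshold)
            wear_level(lvl);                      /* steps 513/528/541 */
        if (count[lvl] < carry_threshold)
            break;                                /* no carry upward */
        count[lvl] = 0;
        count[lvl + 1]++;                         /* steps 521/533/548 */
    }
}

int main(void) {
    for (int i = 0; i < 200; i++) on_access();
    return 0;
}
```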
[0088] Figure 6 shows a method 600 for multi-tier wear leveling defined by the application 111, provided by an embodiment of the present disclosure. For example, the method 600 may be implemented by the software wear-leveling module 121, the hardware wear-leveling module 124, and/or the wear-leveling policy module 255. In one embodiment, the method 600 may be implemented when the application 111 sets the cross-instance wear-leveling policy 135.
[0089] In step 601, it is determined whether the application 111 has set the cross-instance wear-leveling policy 135. For example, the wear-leveling policy module 255 determines whether the application 111 has set the cross-instance wear-leveling policy 135. In step 603, it is determined whether more than one application 111 has set the cross-instance wear-leveling policy 135. For example, the wear-leveling policy module 255 determines whether more than one application 111 has set the cross-instance wear-leveling policy 135. Because cross-instance wear leveling involves changing the physical addresses of the data of two instances 115, at least two applications 111 should have set the cross-instance wear-leveling policy 135. If at least two applications 111 have not set the cross-instance wear-leveling policy 135, the method 600 ends.
[0090] In step 606, it is determined whether the instance-level write count 290 of the accessed instance 115 meets the wear-leveling threshold 297. For example, the wear-leveling policy module 255 determines whether the instance-level write count 290 of the accessed instance 115 meets the wear-leveling threshold 297. In step 609, when the instance-level write count 290 of the accessed instance 115 meets the wear-leveling threshold 297, cross-instance wear leveling is performed between the accessed instance 115 and another instance 115 corresponding to another application 111. For example, the wear-leveling policy module 255 performs cross-instance wear leveling between the instance 115 and the other instance 115.
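A sketch of the method 600 follows; the rule of pairing the accessed instance with the first other opted-in instance is an assumption of the sketch, since the disclosure does not fix how the partner instance is chosen.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define N_APPS 3
typedef struct {
    bool     x_wl_policy;   /* cross-instance wear-leveling policy 135 */
    uint64_t write_count;   /* instance-level write count 290 */
    uint64_t phys_base;     /* physical region backing the instance 115 */
} app_instance;

static void method_600(app_instance *apps, int accessed, uint64_t wl_thresh) {
    if (!apps[accessed].x_wl_policy) return;                 /* step 601 */
    for (int i = 0; i < N_APPS; i++) {
        if (i == accessed || !apps[i].x_wl_policy) continue; /* step 603 */
        if (apps[accessed].write_count < wl_thresh) return;  /* step 606 */
        uint64_t tmp = apps[accessed].phys_base;             /* step 609 */
        apps[accessed].phys_base = apps[i].phys_base;
        apps[i].phys_base = tmp;
        printf("swapped instances of app %d and app %d\n", accessed, i);
        return;
    }
}

int main(void) {
    app_instance apps[N_APPS] = {
        { true,  500, 0x100000 },   /* app 111A: opted in */
        { false,  10, 0x200000 },   /* app 111B: no X-WL policy */
        { true,   20, 0x300000 },   /* app 111C: opted in */
    };
    method_600(apps, 0, 400);       /* access to app 111A's instance */
    return 0;
}
```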
[0091] Figure 7 shows a method 700 for multi-tier wear leveling defined by the application 111, provided by an embodiment of the present disclosure. The method 700 may be implemented by the software wear-leveling module 121 and the hardware wear-leveling module 124. The method 700 may also be implemented by the wear-leveling policy module 255. The method 700 may be implemented when the application 111 sets one or more wear-leveling policies 130 at the storage device 200.
[0092] In step 703, the wear-leveling policy 130 is obtained from the application 111 executing on the storage device 200. For example, the wear-leveling policy module 255 and/or the software wear-leveling module 121 obtains the wear-leveling policy 130 from the application 111. For example, the wear-leveling policy 130 may be a set of executable instructions stored in the memory 250 of the storage device 200. In an embodiment, the wear-leveling policy module 255 may receive a set of executable instructions corresponding to the wear-leveling policy 130 from the memory 250. In an embodiment, the wear-leveling policy module 255 and/or the software wear-leveling module 121 may load a set of executable instructions or program code corresponding to the wear-leveling policy 130 from the memory 250. In some embodiments, obtaining the wear-leveling policy 130 may refer to receiving instructions corresponding to the wear-leveling policy 130 from the memory 250. In one embodiment, the wear-leveling policy 130 indicates the memory size by which wear leveling is performed within the instance 115. In one embodiment, the instance 115 comprises the address range allocated to the application 111 in the memory 250 of the storage device 200. The address range may be a virtual address range or a physical address range.
[0093] In step 706, a request to access the instance 115 is obtained. For example, the wear-leveling policy module 255 and/or the software wear-leveling module 121 obtains a request to access the instance 115 from the application 111. For example, the request to access the instance 115 may be a write request or a read request from a client. In step 709, wear leveling is performed on the multiple memory cells in the instance 115 according to the wear-leveling policy 130. For example, the wear-leveling policy module 255 and/or the software wear-leveling module 121 performs wear leveling on the multiple memory cells in the instance 115 according to the wear-leveling policy 130.
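The three steps of the method 700 can be sketched end to end as follows; all function bodies are stubs standing in for the wear-leveling policy module 255 and the software/hardware wear-leveling modules.

```c
#include <stdio.h>

typedef struct { int level; } wl_policy;       /* policy 130 (simplified) */
typedef struct { unsigned long vaddr; } request;

static wl_policy obtain_policy(void)  { return (wl_policy){ 2 }; }
static request   obtain_request(void) { return (request){ 0x40 }; }
static void      do_wear_leveling(const wl_policy *p, const request *r) {
    printf("WL at level %d for access to 0x%lx\n", p->level, r->vaddr);
}

int main(void) {
    wl_policy p = obtain_policy();     /* step 703 */
    request   r = obtain_request();    /* step 706 */
    do_wear_leveling(&p, &r);          /* step 709 */
    return 0;
}
```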
[0094] Figure 8 shows a table 800 illustrating how different examples of storage structures, data structures, and access patterns use the wear-leveling policies 130 disclosed herein. As shown in table 800, column 803 shows different examples of storage structures. Column 806 shows the different data structures and access patterns corresponding to the storage structures. Column 809 shows the different memory sizes corresponding to the storage structures. Column 811 shows the different durability requirements corresponding to the storage structures. Columns 812, 815, 818, and 821 show the different wear-leveling policies 130 that may be set for the corresponding storage structure according to the data structure, access pattern, memory size, and durability requirements.
[0095] The different rows in the table 800 correspond to different examples of storage structures with different data structures, access patterns, memory sizes, and durability requirements. As shown in table 800, row 825 is a read-cache metadata storage structure, row 828 is a read-cache data storage structure, row 831 is a write-log storage structure, and row 834 is a write-cache data storage structure. As shown in table 800, each of these storage structures may set wear-leveling policies 130 at different levels according to the data structure, access pattern, memory size, and durability requirements corresponding to the storage structure. For example, the write-log and write-cache data storage structures have high durability requirements, meaning that an application 111 using these types of storage structures cannot tolerate corruption of the data stored in them. Therefore, storage structures with high durability requirements do not allow cross-instance wear leveling (as shown in column 821).
[0096] Figure 9 shows a table 900 further illustrating how different examples of storage structures with different memory usage, access patterns, memory sizes, and QoS requirements use the wear-leveling policies 130 disclosed herein. As shown in table 900, column 903 shows different examples of memory usage and access patterns that may be used by the application 111. Column 906 shows the different memory sizes corresponding to the storage structure and/or the application 111 using the storage structure. Column 909 shows the different QoS requirements of the application 111 corresponding to, and/or using, the storage structure. Columns 911, 912, 915, and 918 show the different wear-leveling policies 130 that may be set for the corresponding storage structure and/or application 111 according to the memory usage, access pattern, memory size, and QoS requirements.
[0097] The different rows 930 to 934 in the table 900 may correspond to different examples of virtual machines running the application program 111 with different memory usages, access patterns, memory sizes, and QoS requirements. As shown in table 900, a different level of wear leveling policy 130 can be set for each of these virtual machines according to the memory usage, access pattern, memory size, and QoS requirements of the virtual machine and/or the application program 111. For example, a virtual machine with instruction/read-only, executable, byte-sized access (as shown in row 921) has a high QoS requirement, which means that the data stored by the virtual machine cannot be moved. Therefore, no wear leveling policy 130 is set for a virtual machine with a high QoS requirement. In contrast, a virtual machine running an application program 111 with moderate QoS requirements may be given different types of wear leveling policies 130, depending on its memory usage or access pattern.
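By way of example only, the selection pattern suggested by tables 800 and 900 can be sketched in C as follows: a high durability requirement disables cross-instance movement, and a high QoS requirement disables wear leveling altogether because the data cannot be moved. The enumeration and structure below are assumptions of this sketch, not part of the disclosure.

    /* Toy policy selector mirroring the pattern of tables 800 and 900. */
    enum requirement { REQ_LOW, REQ_MODERATE, REQ_HIGH };

    struct wl_choice {
        int do_wear_leveling;       /* perform any wear leveling at all? */
        int allow_cross_instance;   /* permit movement between instances? */
    };

    struct wl_choice wl_select(enum requirement durability, enum requirement qos)
    {
        struct wl_choice c = { 1, 1 };
        if (qos == REQ_HIGH)            /* data may not move at all */
            c.do_wear_leveling = 0;
        if (durability == REQ_HIGH)     /* no movement across instances */
            c.allow_cross_instance = 0;
        return c;
    }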
[0098] Figure 10 shows a device 1000 for implementing one or more of the methods described herein (e.g., the method 700 of Figure 7). The device 1000 includes a device 1002 for obtaining a wear leveling policy from an application program executable on a storage device. In one embodiment, the wear leveling policy indicates the memory size by which to perform wear leveling within an instance. In one embodiment, the instance comprises an address range allocated to the application program in the memory of the storage device. The device 1000 also includes a device 1004 for obtaining a request to access the instance, and a device 1006 for performing wear leveling on a plurality of storage units in the instance according to the wear leveling policy.
[0099] As disclosed herein, a multi-layer wear leveling scheme can be implemented at different layers, such as the bit level 301, nibble level 303, codeword level 306, sector level 309, or instance level 311. The write count-based function disclosed herein provides a mechanism for varying the rotation frequency within each layer of wear leveling and for incrementing the write count of the next coarser layer when a layer's count reaches its threshold. The application-defined mechanism enables the application program 111 to select or bypass certain layers when wear leveling is performed, providing more durability to the system, and enables the application program to freely define wear leveling within the instance according to its durability and QoS requirements.
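A minimal sketch of such a write count cascade, assuming illustrative per-layer thresholds, is shown below: each write increments the finest layer's count, and a count that reaches its threshold triggers rotation at that layer, resets, and carries one increment into the next coarser layer, so coarser layers rotate less frequently. The specific layer set and threshold values are assumptions of the sketch.

    #include <stdint.h>

    /* One write counter per layer; a full lower layer increments the
     * next coarser layer, as in paragraphs [0104] to [0106]. */
    enum { NIBBLE, CODEWORD, SECTOR, INSTANCE, N_LAYERS };

    static uint32_t count[N_LAYERS];
    static const uint32_t threshold[N_LAYERS] = { 16, 64, 256, 1024 };

    void wl_count_write(void)
    {
        for (int layer = NIBBLE; layer < N_LAYERS; layer++) {
            if (++count[layer] < threshold[layer])
                break;                /* no carry into the coarser layer */
            /* rotate data at this layer (omitted), then carry upward */
            count[layer] = 0;
        }
    }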
[0100] According to one aspect of the present disclosure, there is provided a system comprising: an acquisition module or device for acquiring a wear leveling policy from an application program executable on a storage device, wherein the wear leveling policy indicates the memory size by which to perform wear leveling within an instance, the instance comprising an address range allocated to the application program in the memory of the storage device; an acquisition module or device for obtaining a request to access the instance; and a wear leveling module or device for performing wear leveling on a plurality of storage units in the instance according to the wear leveling policy.
[0101] Optionally, in any of the foregoing aspects, another implementation of the aspect provides that the memory size is based on at least one of bits, nibbles, codewords, or sectors.
[0102] Optionally, in any one of the foregoing aspects, another implementation of the aspect provides that the device for performing wear leveling on the plurality of storage units according to the wear leveling policy includes a device for moving data of the memory size stored in one storage unit to a different storage unit among the plurality of storage units.
[0103] Optionally, in any one of the foregoing aspects, another implementation of the aspect provides that the system further includes a module or device for determining whether a write count of an address range associated with the memory size is greater than or equal to a wear leveling threshold, wherein the wear leveling is performed when the write count is greater than or equal to the wear leveling threshold.
[0104] Optionally, in any one of the foregoing aspects, another implementation of the aspect provides that the request to access the instance is a write request including an address of a nibble, the wear leveling policy indicates that the memory size by which to perform wear leveling is based on the nibble, and the nibble includes four bits of data. The system further includes a module or device for obtaining a write count of the nibble associated with the address in the write request, and for incrementing a write count of the codeword associated with the address when the write count of the nibble is greater than or equal to a write count threshold.
[0105] Optionally, in any one of the foregoing aspects, another implementation of the aspect provides that the request to access the instance is a write request including an address of a codeword, and the wear leveling policy indicates that the memory size by which to perform wear leveling is based on the codeword. The system further includes a module or device for obtaining a write count of the codeword associated with the address in the write request, and for incrementing a write count of the sector associated with the address when the write count of the codeword is greater than or equal to a write count threshold.
[0106] Optionally, in any one of the foregoing aspects, another implementation of the aspect provides that the request to access the instance is a write request including an address of a sector, and the wear leveling policy indicates that the memory size by which to perform wear leveling is based on the sector. The system further includes an obtaining module or device for obtaining a write count of the sector associated with the address in the write request, and an incrementing module or device for incrementing a write count of the instance when the write count of the sector is greater than or equal to a write count threshold.
[0107] Optionally, in any one of the foregoing aspects, another implementation of the aspect provides that the system further includes: an acquisition module or device for acquiring a cross-instance wear leveling policy from a second application program executable on the storage device, wherein the cross-instance wear leveling policy indicates whether wear leveling is allowed between instances assigned to different application programs; and a cross-instance wear leveling module or device for performing cross-instance wear leveling between the instance of the application program and a second instance allocated to the second application program when the cross-instance wear leveling policies of the application program and the second application program indicate that wear leveling between storage units allocated to different application programs is allowed.
[0108] Optionally, in any one of the foregoing aspects, another implementation of the aspect provides that the cross-instance wear leveling is performed in response to the write count of the address range being greater than or equal to a cross-instance wear leveling threshold.
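By way of illustration, the cross-instance behavior of paragraphs [0107] and [0108] might look as follows in C: movement between two applications' instances occurs only when both policies permit it and the source range's write count has reached the cross-instance threshold. The threshold value and structure fields are assumptions of this sketch.

    #include <stdint.h>

    #define XWL_THRESHOLD 4096u     /* assumed cross-instance threshold */

    struct instance {
        uint64_t base, len;         /* address range of the instance */
        uint32_t write_count;       /* aggregate write count of the range */
        int allow_cross_instance;   /* from the application's policy */
    };

    /* Returns 1 if cross-instance wear leveling was performed. */
    int wl_cross_instance(struct instance *a, struct instance *b)
    {
        if (!a->allow_cross_instance || !b->allow_cross_instance)
            return 0;               /* at least one application forbids it */
        if (a->write_count < XWL_THRESHOLD)
            return 0;               /* source range is not hot enough yet */
        /* relocating data between the two ranges is omitted here */
        a->write_count = 0;
        return 1;
    }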
[0109] Although several embodiments have been provided in the present disclosure, it should be understood that the disclosed systems and methods may be embodied in many other specific forms without departing from the spirit or scope of the present disclosure. The present examples are to be considered illustrative rather than restrictive, and the intention is not to be limited to the details given herein. For example, various elements or components may be combined or integrated in another system, or certain features may be omitted or not implemented.
[0110] In addition, techniques, systems, subsystems, and methods described and illustrated in the various embodiments as discrete or separate may be combined or integrated with other systems, modules, techniques, or methods without departing from the scope of the present disclosure. Other items shown or discussed as coupled or directly coupled or communicating with each other may be indirectly coupled or communicating through some interface, device, or intermediate component, whether electrically, mechanically, or otherwise. Other examples of changes, substitutions, and alterations are ascertainable by one skilled in the art and may be made without departing from the spirit and scope disclosed herein.