Multi-layer flash memory device, solid-state drive, and truncated non-volatile memory system

A non-volatile memory technology, used in static memory, read-only memory, information storage, and similar fields, which solves the problem of the short service life of flash memory devices and achieves the effect of prolonging that service life.

Active Publication Date: 2010-06-09
INFOMICRO ELECTRONICS SHENZHEN
1 Cites 53 Cited by

AI-Extracted Technical Summary

Problems solved by technology

[0011] The object of the present invention is to provide a multi-layer flash memory devic...

Method used

[0084] The ordering of packets sent on the logical block address bus 38 is reordered from the ordering of packets on the host memory bus 18. Transaction manager 36 overlaps and interleaves transactions into different blocks of flash storage to increase data throughput. For example, packets of several incoming host transactions are stored in SDRAM buffer 60 via virtual memory bridge 42 or an associated buffer (not shown). The transaction manager 36 examines these buffered transactions and packets, reorders the packets and sends them from the logical block address bus 38 to a flash storage block in one of the raw NAND-type flash chips 68 downstream.
[0091] The buffer in the SDRAM 60 is coupled to the virtual buffer bridge 32 and can store data. SDRAM 60 is a synchronous dynamic random access memory on smart memory switch 30 . The SDRAM 60 buffer can be the storage space of the SDRAM storage module located in the mainboard of the host, because usually the storage capacity of the SDRAM module on the mainboard is much larger, and the cost of the intelligent storage switch 30 can be reduced. In addition, due to the larger capacity SDRAM and more powerful CPU usually located in the mainboard 10, the function of the smart storage switch 30 can be embedded in the mainboard, which further enhances the storage efficiency of the system.
[0095] The ordering of packets transmitted on the logical block address bus 28 is reordered from the ordering of packets on the host memory bus 18. Transaction manager 36 overlaps and interleaves transactions into different blocks of flash storage to increase data throughput. For example, packets of several incoming host transactions are stored in SDRAM buffer 60 via virtual buffer bridge 32 or an associated buffer (not shown). The transaction manager 36 checks these buffered transactions and packets, and after reordering these packets, sends them from the internal logical block address bus 38 to a flash storage block in one of the flash memory modules 73 downstream.
[0118] FIG. 4F shows another data partitioning arrangement, using error correction values in two orthogonal dimensions of parity and ECC error correction codes, and having two different error detection/correction methods. For example, segment S1P utilizes one parity or ECC error correction code method, while segment S1P' utilizes another ECC error correction code method. A simple example is to use Hamming codes in one dimension and a Reed-Solomon or BCH encoding method in the second dimension. With this higher-dimensional error-correction encoding, the probability of recovery when any single flash memory chip fails in operation is higher, protecting data consistency. A near-failure flash device can be replaced before failure to prevent system failure.
[0120] As shown in Figures 4C-F, data can be stored in segments of flash endpoints, parity or ECC error correction code segments in several permutations, and passed through the flash storage segments in a linear fashion. Additionally, data can be arranged to provide redundant storage (as shown in Figure 4E), similar to a redundant array of independent disks (RAID) system, to increase system stability. Data is written to both fragments and can be read from either fragment.
[0122] FIG. 6 illustrates data partitioning tightly coupled to the required segment size of a flash memory device. Each channel of the flash memory module 73 of FIG. 2 and other figures has two flash chip packages, each package has two flash dies, and each flash die has two planes. Since a package has two dies, each die has two planes, using the two-plane type commands of the flash memory can improve the flash memory access speed. When each plane can store one page of data, the segment size can be s...

Abstract

The invention belongs to the memory field and provides a multi-layer flash memory device, a solid-state drive, and a truncated non-volatile memory system. The multi-layer flash memory device comprises raw NAND flash memory chips that are accessed by physical block address through a non-volatile memory controller. The non-volatile memory controller is arranged on a flash memory module or on the system board of the solid-state drive and converts logical block addresses into physical block addresses. An intelligent storage transaction manager controls data partitioning and interleaving between the channels of flash memory modules at the higher level, while the non-volatile memory controllers control further interleaving and remapping within the channels, thereby prolonging the service life of the flash memory device.

Application Domain

Technology Topic

Image

  • Multi-layer flash memory device, solid-state drive, and truncated non-volatile memory system

Examples

  • Experimental program(1)

Example Embodiment

[0076] In order to make the objectives, technical solutions and advantages of the present invention clearer, the present invention will be further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are only used to explain the present invention, but not to limit the present invention.
[0077] FIG. 1A shows a smart storage switch connected to raw NAND-type flash memory devices. The smart storage switch 30 is connected to the host storage bus 18 through the upstream interface 34. The smart storage switch 30 is also connected to the raw NAND-type flash memory chips 68 through a physical block address (PBA) bus 473. A transaction from the virtual storage bridge 42 on the logical block address (LBA) bus 38 is demultiplexed by the multiplexer/demultiplexer 41 and sent to an NVM controller 76, which converts the logical address into a physical block address and sends it to the raw NAND-type flash memory chips 68. Each NVM controller 76 may include one or more channels.
[0078] NVM controller 76 may act as a protocol bridge providing physical signals, such as transmitting and receiving differential signals on any of the differential data lines of the logical block address bus 38, detecting or generating packet start or packet termination formats, and checking or generating checksums, as well as higher-level functions such as inserting or extracting device addresses, packet types, and commands. The host address from the host motherboard 10 includes the logical block address sent over the logical block address bus 38, although in some embodiments (e.g., those implementing two-level wear leveling, bad block management, etc.) the logical block address may be remapped by the smart storage switch 30.
[0079] The smart storage switch 30 may operate in a single-endpoint mode. The smart storage switch 30 handles aggregation and virtual switching.
[0080] The internal processor bus 61 allows data to flow to the virtual storage processor 140 and the SDRAM 60. A buffer in the SDRAM 60 coupled to the virtual storage bridge 42 enables data to be stored. The SDRAM 60 buffer is the synchronous dynamic random access memory in the smart storage switch 30, or it can be storage space in an SDRAM memory module on the host motherboard 10; because the memory capacity of the SDRAM module on the motherboard is usually much larger, the cost of the smart storage switch 30 can be reduced. In addition, since the more powerful CPU and larger-capacity SDRAM are usually located on the host motherboard, the function of the smart storage switch 30 can be embedded in the host motherboard 10 to further enhance the storage efficiency of the system. A FIFO 63 may be used in conjunction with the SDRAM 60 to buffer transmit and receive packets between the upstream interface 34 and the virtual storage bridge 42.
[0081] The virtual storage processor 140 provides remapping services to the intelligent storage transaction manager 36. For example, the logical address from the host can be looked up and converted into a logical block address that is sent to the NVM controllers 76 over the logical block address bus 38. Host data may be alternately distributed to the NVM controllers 76 in an interleaved fashion by the virtual storage processor 140 or the intelligent storage transaction manager 36. Each NVM controller 76 then performs low-level interleaving among the raw NAND-type flash memory chips 68 within its one or more channels. Interleaving can thus be done at two levels: high-level interleaving between two or more NVM controllers 76 by the intelligent storage transaction manager 36, and low-level interleaving among the raw NAND-type flash memory chips 68 within each NVM controller 76.
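The two-level interleaving described above can be illustrated with a short sketch. The following Python fragment is a minimal model, not the patent's implementation; the controller count, chip count, and round-robin policy are assumptions chosen only to show how a high-level distribution across NVM controllers 76 can be combined with a low-level distribution across the raw NAND chips behind each controller.

```python
# Minimal sketch of two-level interleaving (assumed round-robin policy).
NUM_NVM_CONTROLLERS = 4      # high-level targets chosen by the smart storage switch
CHIPS_PER_CONTROLLER = 2     # raw NAND chips behind each NVM controller

def route_segment(segment_index):
    """Map a host data segment to (controller, chip) by two levels of interleaving."""
    controller = segment_index % NUM_NVM_CONTROLLERS                       # high level
    chip = (segment_index // NUM_NVM_CONTROLLERS) % CHIPS_PER_CONTROLLER   # low level
    return controller, chip

if __name__ == "__main__":
    for seg in range(8):
        ctrl, chip = route_segment(seg)
        print(f"segment {seg} -> NVM controller {ctrl}, raw NAND chip {chip}")
```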
[0082] The NVM controller 76 performs logical-to-physical remapping as part of the flash translation layer, translating the logical block address received on the logical block address bus 38 into the physical block address of the actual non-volatile memory block accessed in the raw NAND-type flash memory chips 68. The NVM controller 76 may also perform wear leveling, bad block remapping, and other low-level management functions.
[0083] When operating in single-endpoint mode, the intelligent storage transaction manager 36 not only buffers data using the virtual storage bridge 42, but also reorders packets of transactions from the host. A transaction may have several packets, such as an initial command packet to initiate a memory read, a data packet returned to the host from the storage device, and a handshake packet to end the transaction. It is not necessary for all packets of the first transaction to complete before the next transaction starts; the smart storage switch 30 can reorder the packets of the next transaction and send them to the NVM controllers 76 before the first transaction completes. This provides more time for the memory access of the next transaction. Thus, with packet reordering, transactions overlap.
[0084] The ordering of packets sent on the logical block address bus 38 is a reordering of the ordering of packets on the host storage bus 18. Transaction manager 36 overlaps and interleaves transactions into different flash memory blocks to increase data throughput. For example, packets of several incoming host transactions are held in SDRAM buffer 60 via the virtual storage bridge 42 or associated buffers (not shown). Transaction manager 36 examines the buffered transactions and packets, reorders the packets, and sends them from the logical block address bus 38 to a flash memory block within one of the raw NAND-type flash memory chips 68 downstream.
[0085] FIG. 1B shows a host system that includes a flash module. The motherboard system controller 404 is connected to a central processing unit (CPU) 402 through a front-side bus or other high-speed CPU bus. The CPU 402 reads or writes to an SDRAM buffer 410 controlled by the volatile memory controller 408. The SDRAM buffer 410 may be a memory module containing several DRAM chips.
[0086] Through the motherboard system controller 404, using the volatile memory controller 408 and the non-volatile memory controller 406, data from flash memory may be transferred to the SDRAM buffer 410. A direct memory access (DMA) controller or the CPU 402 may be used to perform this transfer. The non-volatile memory controller 406 reads and writes the flash memory modules 414, or the logical block address NVM (LBA-NVM) devices 412 controlled by the smart storage switch 430 can be accessed.
[0087] The LBA-NVM devices 412 include the NVM controller 76 and raw NAND-type flash memory chips 68. The NVM controller 76 translates logical block addresses (LBA) to physical block addresses (PBA). The smart storage switch 30 sends logical block addresses to the LBA-NVM devices 412, while the non-volatile memory controller 406 sends physical block addresses to the flash modules 414 via the physical bus 422. A host system may have only one type of NVM subsystem, either flash modules 414 or LBA-NVM devices 412, although some systems may have both types.
[0088] FIG. 1C shows the flash modules 414 of FIG. 1B arranged in parallel on a single segment of the physical bus 422. FIG. 1D shows the flash modules 414 of FIG. 1B arranged serially on multiple segments of the physical bus 422, forming a daisy chain.
[0089] FIG. 2 shows a smart storage switch that uses flash memory modules with NVM controllers contained on the modules. The smart storage switch 30 is connected to the host system 11 on the host storage bus 18 through the upstream interface 34. The smart storage switch 30 is also connected to downstream flash memory devices on the logical block address bus 28 through the virtual storage bridges 42, 43.
[0090] The virtual storage bridges 42, 43 are protocol bridges that also provide physical signals, such as transmitting and receiving differential signals on any of the differential data lines of the logical block address bus 28, detecting or generating packet start or packet end formats, and checking or generating checksums, as well as higher-layer functions such as inserting or extracting device addresses, packet types, and commands. The host address from the host system 11 includes the logical block address sent over the logical block address bus 28, although in some embodiments (e.g., those performing two-level wear leveling, bad block management, etc.) the logical block address may be remapped by the smart storage switch 30.
[0091] Buffers in the SDRAM 60 are coupled to the virtual buffer bridge 32 and can store data. The SDRAM 60 is a synchronous dynamic random access memory on the smart storage switch 30. The SDRAM 60 buffer may also be storage space in an SDRAM memory module located on the host motherboard, since the memory capacity of the SDRAM module on the motherboard is usually much larger and the cost of the smart storage switch 30 can thereby be reduced. In addition, since the larger-capacity SDRAM and more powerful CPU are typically located on the host motherboard 10, the function of the smart storage switch 30 can be embedded in the host motherboard, which further enhances the storage efficiency of the system.
[0092] The virtual storage processor 140 provides remapping services to the intelligent storage transaction manager 36. For example, the logical address from the host can be looked up and converted into a logical block address that is sent to the flash memory modules 73 over the logical block address bus 28. Host data may be alternately allocated to the flash memory modules 73 in an interleaved fashion by the virtual storage processor 140 or the intelligent storage transaction manager 36. The NVM controller 76 within each flash memory module 73 may then perform low-level interleaving among the raw NAND-type flash memory chips 68 within that module. Interleaving can thus be done on two levels: high-level interleaving between two or more flash memory modules 73 by the intelligent storage transaction manager 36, and low-level interleaving among the raw NAND-type flash memory chips 68 within each flash memory module 73.
[0093] The NVM controller 76 performs logical-to-physical remapping as part of the flash translation layer, translating the logical block address received on the logical block address bus 28 into the physical block address of the actual non-volatile memory block accessed in the raw NAND-type flash memory chips 68. The NVM controller 76 may also perform wear leveling, bad block remapping, and other low-level management functions.
[0094] When operating in single-endpoint mode, the intelligent storage transaction manager 36 not only buffers data using the virtual buffer bridge 32, but also reorders packets of transactions from the host. A transaction may have several packets, such as an initial command packet to initiate a memory read, a data packet returned to the host from the storage device, and an acknowledgment packet to end the transaction. It is not necessary for all packets of the first transaction to complete before the next transaction starts; the smart storage switch 30 can reorder the packets of the next transaction and send them to the flash memory modules 73 before the first transaction completes. This provides more time for the memory access of the next transaction. Thus, with packet reordering, transactions overlap.
[0095] The ordering of packets sent on the logical block address bus 28 is a reordering of the ordering of packets on the host storage bus 18. Transaction manager 36 overlaps and interleaves transactions into different flash memory blocks to increase data throughput. For example, packets of several incoming host transactions are held in SDRAM buffer 60 via the virtual buffer bridge 32 or an associated buffer (not shown). Transaction manager 36 examines these buffered transactions and packets, reorders the packets, and sends them from the internal logical block address bus 38 to a flash memory block within one of the flash memory modules 73 downstream.
[0096] The packet used by virtual storage bridge 43 to start reading a second flash block may be reordered ahead of the packet used by virtual storage bridge 42 to finish reading another flash block, so that access to the second flash block starts earlier.
[0097] The clock source 62 may generate clocks for the SDRAM 60, the intelligent storage transaction manager 36, the virtual storage processor 140, and other logic blocks within the smart storage switch 30. Clocks from the clock source 62 can also be sent from the smart storage switch 30 to the flash memory module 73, which has an internal clock source 46 that generates an internal clock CK_SR used to synchronize transfers between the NVM controller 76 and the raw NAND-type flash memory chips 68 within the flash memory module 73. Accordingly, transfers of physical blocks and physical block addresses (PBA) are retimed relative to the logical block address (LBA) transfers on the logical block address bus 28.
[0098] FIG. 3A shows a physical block address (PBA) flash module. The flash memory module 110 includes a substrate, such as a multi-layer printed circuit board (PCB), with surface-mounted raw NAND-type flash memory chips 68 mounted on the front side of the substrate, as shown, while more raw NAND-type flash memory chips 68 are mounted on the back side of the substrate (not shown).
[0099] Metal contact points 112 are located on the front and rear surfaces of the substrate along the bottom edge. Metal contacts 112 mate with contacts on the module socket to electrically connect the module to the personal computer motherboard. Holes 116 may be provided on some module boards for proper insertion of the module into the slot position. The recesses 114 are also used for proper insertion and alignment of the modules. The recesses 114 can prevent the wrong type of module from being inserted. Capacitors or other discrete components are surface mounted on the substrate to filter interference signals from raw NAND-type flash memory chips 68, which are also mounted using surface mount technology (SMT).
[0100] Since the flash memory module 110 connects the raw NAND-type flash memory chips 68 directly to the metal contacts 112, the flash memory module 110 is accessed by physical block addresses. The raw NAND-type flash memory chips 68 of FIG. 1A can be replaced with the flash memory module 110 of FIG. 3A.
[0101] The metal contacts 112 form the connection to a flash controller, such as the non-volatile memory controller 406 of FIG. 1B. The metal contacts 112 may constitute part of the physical bus 422 of FIG. 1B, or part of the bus 473 of FIG. 1A.
[0102] FIG. 3B shows a logical block address (LBA) flash module. The flash memory module 73 includes a substrate, such as a multi-layer printed circuit board (PCB), with surface-mounted raw NAND-type flash memory chips 68 and an NVM controller 76 mounted on the front side of the substrate, as shown, while more raw NAND-type flash memory chips 68 are mounted on the back side of the substrate (not shown).
[0103] Metal contacts 112' are located along the bottom edge on the front and rear surfaces of the substrate. Metal contacts 112' mate with contacts on the module socket to electrically connect the module to the personal computer motherboard. Holes 116 may be provided on some module boards for proper insertion of the module into the slot position. The recesses 114 also serve to allow the module to be inserted correctly. Capacitors or other discrete components, which are surface mounted on the substrate, are used to filter interference signals from raw NAND-type flash memory chips 68 .
[0104] Since the flash memory module 73 has the NVM controller 76 on its substrate, the raw NAND-type flash memory chips 68 are not directly connected to the metal contacts 112'. Instead, the raw NAND-type flash memory chips 68 are connected by wire traces to the NVM controller 76, which is then connected to the metal contacts 112'. The flash memory module 73 is therefore accessed over a logical block address bus through the NVM controller 76, e.g., the logical block address bus 28 shown in FIG. 2.
[0105] FIG. 3C shows a solid-state drive (SSD) board that can be connected directly to a host computer. A connector 112" on the SSD board 440 plugs into the host motherboard, e.g., onto the host storage bus 18 of FIG. 1A. The connector 112" may support SATA, PATA, PCI Express, or other buses. The NVM controller 76 and the raw NAND-type flash memory chips 68 are soldered to the SSD board 440. Other logic and buffers may also be present in chip 442. Chip 442 can also include the smart storage switch 30 of FIG. 1A.
[0106] Alternatively, the connector 112" can form part of the physical bus 422 of FIG. 1B. Or, instead of using raw NAND-type flash chips 68, logical block address NAND flash chips (LBA-NAND) can be used to receive logical addresses from the NVM controller.
[0107] FIGS. 4A-4F show various arrangements of data stored in the raw NAND-type flash memory chips 68. Data from the host is partitioned into segments by the partition logic 518 of FIG. 9 and stored in different flash memory modules 73, or in different raw NAND-type flash memory chips 68 as different endpoints within one flash memory module 73. The host operating system reads and writes data files using a cluster, e.g., 4K bytes, as its address tracking unit; the actual data transfer, however, is based on sector (512-byte) units. To perform two levels of data partitioning, the smart storage switch 30 resolves this mismatch when it sends pages (for programming cells) and blocks (for erasing cells) to the physical-block flash memory.
[0108] FIG. 4A shows the operation of N-way address interleaving. The NVM controller sends host data to several channels or chips in parallel. For example, S11, S21, S31, ..., SM1 can be data sent to one NVM controller or one channel. N-way interleaving can improve performance because the host can send commands to one channel without waiting for a response, then send more commands directly to the second channel, and so on.
[0109] In FIG. 4A, the data are arranged in a conventional linear arrangement. In this embodiment, the sequence of data received from the host is S11, S12, S13, ..., S1N, then S21, S22, S23, ..., S2N, with SMN as the last data. In an actual system, the logical block address does not have to start at S11; for example, S13 may be the first data item, and the last data item need not be SMN, e.g., SM3 could be the last data item. Each token of an N-token data item is four times the size of a page of the physical flash storage holding the data, e.g., 4x2K, 4x4K, 4x8K, and so on. Details of each token data item are described further below. All M data items are stored, some of them on different flash memory devices. When a failed operation occurs, e.g., a flash chip fails to send data back, the entire data item on that flash chip is usually lost; however, the other data items stored in the other flash chips can still be read correctly.
[0110] In FIG. 4B, the data is divided and stored across N flash storage endpoints. Each data item is allocated and stored across the N flash storage endpoints. For example, the first N-token data item consists of S11, S12, S13, ..., S1N. Token S11 of this data item is stored in endpoint 1, token S12 in endpoint 2, and so on, until token S1N is stored in endpoint N. After a data item fills all the endpoints, the next round of filling starts. These data items are divided into sectors or pages, or aligned to multiples of sectors or pages.
[0111] FIG. 4C shows another arrangement that adds a dedicated channel or chip for parity or error-correcting code (ECC), used to protect against the failure of one of the N endpoints. On each read, the host controller takes the result from all N+1 channels and compares it with the P parity value in the last channel to determine whether the result is correct. The last channel can also be used to recover the correct value if an error-correction coding technique is used, such as a Reed-Solomon or BCH encoding method.
[0112] In FIG. 4C, data is split across multiple storage endpoints with parity. The raw NAND-type flash memory is divided into N+1 endpoints. The N+1 endpoints are of equal size, and the parity endpoint N+1 is large enough to store the parity or ECC error correction codes of the other N endpoints.
[0113] Each data item is divided into N parts, and each part is stored in different N endpoints. The parity or ECC error correction code of the data item is stored in the last parity endpoint N+1. For example, an N token data item includes tokens S11, S12, S13, ..., S1N. The token S11 of the data item is stored in endpoint 1, the token S12 is stored in endpoint 2, the token S13 is stored in endpoint 3, ..., and the token S1N is stored in the Nth endpoint. The parity or ECC error correction code is stored in the parity endpoint N+1 as token S1P.
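As a rough illustration of the N+1 arrangement of FIG. 4C, the sketch below stripes a data item across N data endpoints and stores XOR parity in endpoint N+1. XOR is only a stand-in for whatever parity or ECC method the system actually uses; the token size and endpoint count are assumed values.

```python
# Sketch of FIG. 4C-style striping: N data endpoints plus one parity endpoint.
# XOR parity stands in for the actual parity/ECC method used by the system.
def stripe_with_parity(data_item: bytes, n_endpoints: int, token_size: int):
    tokens = [data_item[i:i + token_size]
              for i in range(0, n_endpoints * token_size, token_size)]
    parity = bytearray(token_size)
    for tok in tokens:
        for i, b in enumerate(tok.ljust(token_size, b"\0")):
            parity[i] ^= b
    return tokens + [bytes(parity)]       # last element goes to endpoint N+1

def recover_missing(stripes, missing_index, token_size: int):
    """Rebuild one lost token by XOR-ing the surviving tokens and the parity."""
    rebuilt = bytearray(token_size)
    for idx, tok in enumerate(stripes):
        if idx == missing_index:
            continue
        for i, b in enumerate(tok.ljust(token_size, b"\0")):
            rebuilt[i] ^= b
    return bytes(rebuilt)

if __name__ == "__main__":
    stripes = stripe_with_parity(b"S11S12S13S14", n_endpoints=4, token_size=3)
    assert recover_missing(stripes, 1, 3) == stripes[1]   # endpoint 2 lost, data recovered
```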
[0114] In the figure, each data item is stored like a stripe across all endpoints. If an error occurs in one of the endpoint devices, most of the data item is intact, allowing recovery using the parity or ECC endpoint flash device.
[0115] FIG. 4D shows a distributed parity arrangement that places the parity along a diagonal. S1P, S2P, S3P form a diagonal line across endpoints N+1, N, N-1, and the parity is distributed along the diagonal to balance the load and avoid the heavy reads and writes within a single parity P channel that can occur with the method of FIG. 4C.
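A minimal way to express the diagonal placement of FIG. 4D is to rotate the parity endpoint from stripe to stripe, much as RAID-5 does; the rotation rule below is an assumed illustration, not the patent's exact mapping.

```python
# Assumed rotation rule for FIG. 4D-style distributed parity (RAID-5-like).
def parity_endpoint(stripe_index: int, total_endpoints: int) -> int:
    """Return which endpoint (0-based) holds parity for a given stripe."""
    # Start at the last endpoint and walk diagonally backwards each stripe.
    return (total_endpoints - 1 - stripe_index) % total_endpoints

if __name__ == "__main__":
    # With N+1 = 5 endpoints, parity lands on 4, 3, 2, 1, 0, 4, ...
    print([parity_endpoint(s, 5) for s in range(6)])
```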
[0116] FIG. 4E shows mirrored storage using two endpoints. The contents of the two endpoints are exactly the same, so the data is stored redundantly. This is a very simple method, but it wastes storage space.
[0117] FIG. 4F is similar to FIG. 4D: parity is distributed across all endpoints instead of being concentrated on one or two endpoints, to avoid heavy use of any one parity endpoint.
[0118] FIG. 4F shows another data partitioning arrangement, using error correction values in two orthogonal dimensions, parity and ECC error correction codes, with two different error detection/correction methods. For example, segment S1P uses one parity or ECC method, while segment S1P' uses another ECC method. A simple example is to use Hamming codes in one dimension and a Reed-Solomon or BCH encoding method in the second dimension. With this higher-dimensional error-correction encoding, the probability of recovery when any single flash memory chip fails in operation is higher, preserving data consistency. A near-failing flash memory device can be replaced before it fails, preventing system failure.
[0119] Errors can be detected through two levels of error detection and correction. Each flash memory segment, including the parity segment, has a page-based ECC error correction code. When a segment page is read, bad bits can be detected and corrected using the ECC error correction code (e.g., Reed-Solomon coding). In addition, the flash memory segments together form a stripe, and parity is kept on one of the segments.
[0120] As shown in FIGS. 4C-4F, data can be stored in segments across flash endpoints, with parity or ECC error correction code segments in several permutations, and the segments written through the flash memory in a linear fashion. Additionally, data can be arranged to provide redundant storage (as in FIG. 4E), similar to a redundant array of independent disks (RAID) system, to improve system stability. Data is written to both segments and can be read from either segment.
[0121] FIG. 5 shows multiple channels of dual-die, dual-plane flash memory devices. The multi-channel NVM controller 176 can drive eight flash memory channels and can be part of the smart storage switch 30 of FIG. 1A. Each channel has a pair of flash multi-die packages 166, 167, each with a first die 160 and a second die 161, and each die has two planes. Thus, each channel can write eight planes or pages simultaneously. Data is divided into eight-page segments to match the number of pages that can be written per channel. A pipeline register 169 in the multi-channel NVM controller 176 can buffer data to each channel.
[0122] FIG. 6 shows data partitioning tightly coupled to the required segment size of the flash memory device. Each channel of the flash memory module 73 of FIG. 2 and other figures has two flash chip packages, each package has two flash dies, and each flash die has two planes. Since a package has two dies, each with two planes, using the flash memory's two-plane commands can improve flash access speed. When each plane can store one page of data, the segment size can be set to eight pages. Thus, one segment is written to each channel, and each channel has one flash memory module 73 with two dies as the raw NAND-type flash memory chips 68.
[0123] The segment depth is the number of channels times the segment size, or N times 8 pages in this example. In an 8-channel system with four dies per channel and two planes per die, the segment depth set by the smart storage switch 30 is 8 x 8, or 64, pages of data. When the number of dies or planes increases, or the page size changes, the data partitioning can change according to the physical flash memory structure. The segment size can vary with the flash page size for maximum efficiency. The purpose of page alignment is to avoid size mismatches between partial and whole pages, thereby increasing access speed and improving wear leveling.
[0124] When performing flash translation layer functions, the NVM controller 76 receives the logical sector address (LSA) from the smart storage switch 30 and translates it to a physical address within the multi-plane flash memory.
[0125] FIG. 7 is an initialization flow chart for each NVM controller 76 that employs data partitioning. When an NVM controller 76 controls multiple dies of raw NAND-type flash memory chips 68, with multiple planes per die per channel, as shown in FIGS. 5-6, each NVM controller 76 performs this startup procedure when power is applied or when a configuration change occurs during manufacturing.
[0126] Each NVM controller 76 receives a special command from an intelligent storage switch, step 190, which causes the NVM controller 76 to scan for bad blocks and determine the physical capacity of the flash memory controlled by the NVM controller.
[0127] The maximum effective size of all flash blocks within all dies controlled by the NVM controller is determined, step 192, as well as the minimum size of spare blocks and other system resources. Any bad blocks found reduce the maximum effective capacity. These values are reserved by special commands during the manufacturing process and are programmable, but cannot be changed by the user.
[0128] The mapping from logical block addresses to physical block addresses is set in the mapper or mapping table for this NVM controller 76 , step 194 . Bad blocks are ignored, and some empty blocks are reserved for later swapping with bad blocks found in the future. The configuration information is stored in a configuration register in the NVM controller 76, step 196, and can be read by an intelligent storage switch.
[0129] FIG. 8 is the initialization flow chart of the smart storage switch when data partitioning is used. When each NVM controller 76 controls multiple dies of raw NAND-type flash memory chips 68, with multiple planes per die per channel, as shown in FIGS. 5-6, the smart storage switch performs this initialization procedure when power is applied during system manufacture or when the configuration is changed.
[0130] By reading the raw flash blocks in the raw NAND-type flash memory chips 68, the smart storage switch enumerates all NVM controllers 76, step 186. The bad block ratio, size, dies stacked per device, and number of planes per die can be obtained. The smart storage switch sends special commands to each NVM controller 76, step 188, and reads the configuration registers on each NVM controller 76, step 190.
[0131] For each NVM controller 76 enumerated in step 186, the number of planes per die P, the number of dies per flash chip D, and the number of flash chips per NVM controller 76 F are obtained, step 180. The number of channels C is also obtained, which may equal the number of NVM controllers 76 or be a multiple of it.
[0132] The segment size is set to N*F*D*P pages, step 182. The segment depth is set to C*N*F*D*P pages, step 184. This information is stored in the NVM configuration space, step 176.
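The segment geometry of steps 180-184 reduces to a pair of products. The sketch below plugs in the example values used elsewhere in the text (two planes per die, two dies per chip, two chips per channel, eight channels); N, taken here as one page per plane, is an assumption made only for illustration.

```python
# Sketch of the segment-geometry calculation from steps 180-184.
N = 1   # pages per plane used as the striping unit (assumption)
P = 2   # planes per die
D = 2   # dies per flash chip
F = 2   # flash chips per NVM controller / channel
C = 8   # channels

segment_size = N * F * D * P          # pages written to one channel
segment_depth = C * segment_size      # pages spanning all channels

print(segment_size)    # 8 pages per channel
print(segment_depth)   # 64 pages across the 8-channel system
```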
[0133] FIG. 9 shows a four-channel smart storage switch, with more detail of the intelligent storage transaction manager. The virtual storage processor 140, the virtual buffer bridge 32 connected to the SDRAM buffer 60, and the upstream interface 34 connected to the host are all connected to the intelligent storage transaction manager 36 and operate as described previously.
[0134] Four channels to the four flash memory modules 950-953 are provided by four virtual storage bridges 42, where each channel has a flash memory module 73 as shown in FIGS. 2-3. The four virtual storage bridges 42 are connected to the multi-channel interleave routing logic 534 in the intelligent storage transaction manager 36. Host data may be interleaved across the four channels and the four flash memory modules 950-953 by the routing logic 534 to improve performance.
[0135] Host data from upstream interface 34 is reordered by reordering unit 516 within intelligent storage transaction manager 36 . For example, a host's packets may be processed out of the order in which they were received. This is a very high-level reordering.
[0136] Split logic 518 may split host data into stripes for writing to different physical devices, as in a redundant array of inexpensive disks (RAID). Error correction code logic 520 may add and check parity and ECC data, while SLV installer 521 may mount a new storage logical volume (SLV) or restore the original SLV. SLV logical volumes can be allocated to different physical flash devices; in this figure, flash memory modules 950-953 are numbered SLV #1, #2, #3, and #4, respectively.
[0137] The virtualization unit 514 virtualizes the host logical addresses and concatenates the flash memory in the flash memory modules 950-953 into a single unit for efficient data processing, including remapping and error handling. The remapping may be performed at a high level by the intelligent storage transaction manager 36 with the wear-level and bad-block monitor 526, which monitors the wear level and bad blocks of each device in the flash memory modules 950-953. This high-level, or president-level, wear leveling can direct new blocks to the least-worn of the flash memory modules 950-953, such as flash memory module 952, which has 250 reads and writes, fewer than the 500, 400, and 300 on the other flash modules. Flash memory module 952 can then perform further low-level, or manager-level, wear leveling among the raw flash chips 68 within flash memory module 952 (FIG. 2).
[0138] Therefore, high-level wear leveling determines the least-worn logical volume or flash memory module, while the selected device performs low-level wear leveling on the flash memory blocks within that flash memory module. Using both the high and low levels of wear leveling, overall wear can be improved and optimized.
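A hedged sketch of the two-level wear-leveling decision described above: the high level picks the module with the lowest wear count, and a second, per-module selection then picks the least-worn block inside it. The counters and the selection rule are illustrative assumptions, not the patent's algorithm.

```python
# Illustrative two-level wear-leveling selection (assumed data, assumed policy).
module_write_counts = {"SLV1": 500, "SLV2": 400, "SLV3": 250, "SLV4": 300}

# Per-module erase counts for the blocks managed by each NVM controller (assumed).
block_erase_counts = {
    "SLV3": [120, 95, 140, 80],
}

def pick_target(modules, blocks_by_module):
    # High level (smart storage switch): least-written flash module.
    module = min(modules, key=modules.get)
    # Low level (NVM controller inside that module): least-erased block.
    erases = blocks_by_module[module]
    block = min(range(len(erases)), key=erases.__getitem__)
    return module, block

if __name__ == "__main__":
    print(pick_target(module_write_counts, block_erase_counts))  # ('SLV3', 3)
```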
[0139] The endpoint and hub mode logic 528 allows the intelligent storage transaction manager 36 to perform endpoint aggregation for switch mode. Rather than using read/write counts, the intelligent storage transaction manager 36 may use the bad block ratio to decide which of the flash memory modules 950-953 to allocate a new block to. Channels or flash modules with a high percentage of bad blocks can be skipped. Small amounts of host data that do not need to be interleaved can be stored in the less-worn flash modules, while larger amounts of host data can be interleaved across all four flash modules, including the more-worn ones. Wear is still reduced, and interleaving is still used to improve the performance of larger multi-block data transfers.
[0140] FIG. 10 is a flow chart of the truncation method. The flash memory size or capacity in each channel can vary. Even when the same size of flash memory is installed in each channel, as flash blocks wear out and go bad, the effective capacity of each channel is reduced and the channels become unequal.
[0141] FIG. 9 shows four channels with capacities of 2007, 2027.5, 1996.75, and 2011 MB (megabytes) in flash memory modules 950-953, respectively. The truncation method of FIG. 10 finds the minimum capacity and truncates all other channels to this minimum capacity. After truncation, all channels have the same capacity, which facilitates data partitioning, as shown in FIG. 4.
[0142] The sizes or capacities of all flash volumes of the flash modules are read, step 202. The truncation interval size is determined, step 204. The interval size can be an integer, such as 1 MB, and can be set by the system or changed.
[0143] Find the smallest volume capacity from all the flash volume capacity sizes read in step 202 , step 206 . This minimum volume capacity is divided by the interval size in step 208. When the remainder is zero, step 210, the truncated volume capacity is set equal to the minimum volume capacity, step 212. Since the minimum volume capacity is a multiple of the interval size, no rounding is required.
[0144] When the remainder is not zero, step 210, the truncated volume capacity is set equal to the minimum volume capacity minus the remainder, step 214. Because the minimum volume capacity is not a multiple of the interval size, it is rounded down to one.
[0145] The entire storage capacity is then set to the truncated volume capacity multiplied by the number of flash volumes, step 216 .
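The truncation procedure of steps 202-216 can be transcribed directly. The sketch below uses the four channel capacities quoted above and a 1 MB interval; it is a plain restatement of the flow chart, not vendor firmware.

```python
# Sketch of the FIG. 10 truncation method (steps 202-216).
def truncate_capacity(volume_capacities_mb, interval_mb=1):
    minimum = min(volume_capacities_mb)            # step 206
    remainder = minimum % interval_mb              # step 208
    if remainder == 0:                             # step 210
        truncated = minimum                        # step 212
    else:
        truncated = minimum - remainder            # step 214 (round down)
    total = truncated * len(volume_capacities_mb)  # step 216
    return truncated, total

if __name__ == "__main__":
    # Capacities from the four-channel example above (in MB).
    print(truncate_capacity([2007, 2027.5, 1996.75, 2011]))  # (1996.0, 7984.0)
```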
[0146] FIG. 11 shows the Q-R pointer table and command queue in the SDRAM buffer. The SDRAM 60 stores sector data from the host in the sector data buffer 234; this is the data to be written to the flash memory modules. When a read hits the sector data buffer 234 within the SDRAM 60, the host's read may be served from the sector data buffer 234 rather than from the slower flash memory.
[0147] The Q-R pointer table 232 contains entries that point to sectors within the sector data buffer 234. The logical address from the host is divided by the size of the sector data buffer 234 (that is, the number of sectors it can store), which yields the quotient Q and the remainder R. The remainder R selects a location in the sector data buffer 234, while the quotient Q can be used to check whether the sector data buffer 234 is hit. The Q-R pointer table 232 stores the quotient Q, the remainder R, and the data type DT. The data type indicates the status of the data in the SDRAM 60. Data type 01 indicates that the data in the SDRAM 60 must be copied out to flash memory immediately. Data type 10 indicates that the data is valid only in the SDRAM 60 and has not yet been copied to flash. Data type 11 indicates that the data is valid in the SDRAM 60 and has been copied to flash, so the flash copy is also valid. Data type 00 indicates that the data in the SDRAM 60 is invalid.
[0148] Data types:
[0149] 0, 0 - the location is empty;
[0150] 1, 0 - data needs to be copied out to flash, but can be handled in the background, not immediately;
[0151] 0, 1 - data is being written to flash memory and needs immediate handling;
[0152] 1, 1 - data has been written to flash; the copy remaining in SDRAM can be used for immediate reads or overwritten by new data.
[0153] Commands from the host are stored in the command queue 230. A command entry in the command queue 230 stores the host logical block address LBA, the transfer length (e.g., the number of sectors to be transferred), the quotient Q and the remainder R, a cross-boundary (X-BDRY) flag indicating that the data transfer crosses the end of the sector data buffer 234 and wraps around to cover sectors at the beginning of the sector data buffer 234, a read/write flag, and the data type. Other data may also be stored, such as the sector offset of the first sector of the logical block address to be accessed. The start and end logical addresses can be stored instead of the transfer length.
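To make the Q-R bookkeeping concrete, the sketch below models the pointer-table and command-queue entries as simple records. The field names and the 16-location buffer size are assumptions for illustration only.

```python
# Minimal model of the Q-R pointer table and command queue (field names assumed).
from dataclasses import dataclass

BUFFER_SECTORS = 16   # illustrative sector-data-buffer size

# Data types from the list above.
DT_EMPTY, DT_DIRTY_BG, DT_DIRTY_NOW, DT_READ_CACHE = "00", "10", "01", "11"

@dataclass
class QRPointerEntry:
    q: int        # quotient of LBA / buffer size
    r: int        # remainder, i.e. the location in the sector data buffer
    dt: str       # data type: 00, 01, 10 or 11

@dataclass
class CommandQueueEntry:
    lba: int              # starting host logical block address
    length: int           # number of sectors to transfer
    q: int
    r: int
    cross_boundary: bool  # X-BDRY flag: the transfer wraps past the end of the buffer
    is_write: bool
    dt: str

def locate(lba: int) -> tuple[int, int]:
    """Split a host LBA into (Q, R) for the sector data buffer."""
    return divmod(lba, BUFFER_SECTORS)

if __name__ == "__main__":
    print(locate(21))   # (1, 5) -- matches the C3 write in the FIG. 16F example below
```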
[0154] FIG. 12 is a flow chart of the host interface to the sector data buffer in SDRAM. When the smart storage switch receives a command from the host, the host command includes a logical address, such as a logical block address (LBA); the logical block address is divided by the overall size of the sector data buffer 234 to obtain the quotient Q and the remainder R, step 342. The remainder R points to a location in the sector data buffer 234, and that location is read, step 344. When the data type at location R is empty (00) or read cache (11), location R can be overwritten: the empty data type 00 can be overwritten by new data without first copying anything to flash, and sector data of read cache type 11 has already been copied out to flash, so it too can be overwritten. The new data from the host is written at location R of the sector data buffer 234, and the entry for location R in the Q-R pointer table 232 is updated with the new Q, step 352. The new data type is set to type 10 to indicate that the data must eventually be copied to flash, but need not be processed immediately.
[0155] The length LEN is decremented, step 354, and when LEN reaches zero, the host transfer ends, step 356. Otherwise, the logical block address (LBA) sector address is incremented, step 358, and processing returns to step 342.
[0156] When the read at location R finds data type 01 or 10 in step 344, the data at location R of the SDRAM 60 is dirty, step 346, and cannot be overwritten until it is copied out to flash, unless the host writes to exactly the same address (a write hit). When the quotient Q from the host address matches the stored Q, a write hit occurs, step 348. The new data from the host may then overwrite the old data in the sector data buffer 234, step 352. The new data type is set to type 10.
[0157] When the quotient Q does not match, step 348, the host is writing to a different address. The old data in the sector data buffer 234 must be copied to flash memory immediately. The data type is first set to type 01. The old data is then written to flash memory, or to a cache (e.g., a FIFO connected to the flash memory), step 350. When the old data has been copied to flash memory, the data type can be set to read cache type 11. The procedure then returns to step 344; step 346 will now pass, leading to step 352, where the host data overwrites the old data that has been copied to flash memory.
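The write-side decision of FIG. 12 (steps 342-358) can be sketched as a loop over the transfer length. This is a simplified model under the same assumptions as the previous sketch: a 16-location buffer indexed by R, the data types listed above, and a stand-in flush_to_flash() in place of the real flash write path.

```python
# Simplified sketch of the FIG. 12 host-write flow (steps 342-358).
BUFFER_SECTORS = 16
DT_DIRTY_BG, DT_DIRTY_NOW, DT_READ_CACHE = "10", "01", "11"

# Each buffer location holds (Q, data type, sector data); None means empty (type 00).
buffer = [None] * BUFFER_SECTORS

def flush_to_flash(location):
    """Stand-in for copying a dirty sector out to flash memory."""
    q, _, data = buffer[location]
    buffer[location] = (q, DT_READ_CACHE, data)      # old data is now safe in flash

def host_write(lba, sectors):
    for offset, data in enumerate(sectors):
        q, r = divmod(lba + offset, BUFFER_SECTORS)  # step 342
        entry = buffer[r]                            # step 344
        if entry is not None and entry[1] in (DT_DIRTY_BG, DT_DIRTY_NOW):
            if entry[0] != q:                        # steps 346-350: dirty, not a write hit
                buffer[r] = (entry[0], DT_DIRTY_NOW, entry[2])
                flush_to_flash(r)                    # copy the old data out first
        buffer[r] = (q, DT_DIRTY_BG, data)           # step 352: overwrite, set type 10
    # steps 354-358: the loop ends when the whole transfer length is consumed

if __name__ == "__main__":
    host_write(1, [b"C0"] * 3)    # LBA 1..3
    host_write(17, [b"C0'"])      # LBA 17 maps to R=1 with a new Q, forcing a flush (assumed case)
    print([(r, e[0], e[1]) for r, e in enumerate(buffer) if e])
```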
[0158] FIGS. 13A-13C are flow charts of the operation of the command queue manager. The command queue manager controls the command queue 230 of FIG. 11. When the command from the host is a read, step 432, and the logical block address (LBA) from the host hits the command queue, i.e., the LBA falls within the length LEN counted from a logical block address already in the command queue, step 436, the data is read from the sector data buffer, step 442, and sent to the host; a flash read has been avoided by the cache read. The length LEN is decremented, step 444, and the command queue is updated (marked) if needed, step 446. When the length reaches zero, step 448, the entries in the command queue may be re-prioritized before the operation completes, step 450. When the length is not zero, the procedure repeats from step 432 for the next data in the host transfer.
[0159] When the logical block address of the host read command does not hit the command queue, step 436, but the quotient Q matches in the Q-R pointer table 232, step 438, then although there is no entry in the command queue 230, there is a matching entry in the sector data buffer 234. When the data type is read cache type, step 440, the data can be read from the sector data buffer 234 and sent to the host, step 442. The procedure then continues as described above.
[0160] When the data type is not read cache type, step 440, the procedure continues from point A in FIG. 13B. The flash memory is read, loaded into the SDRAM, and sent to the host, step 458. Q, R, and the data type are updated in the Q-R pointer table 232, step 460, and the procedure continues from point E in FIG. 13A at step 444.
[0161] When the quotient Q does not match in the Q-R pointer table 232, step 438, indicating that there is no matching entry in the sector data buffer 234, the procedure continues from point B in FIG. 13B. In FIG. 13B, when the data type is a write cache type (10 or 01), step 452, the old data is copied out of the sector data buffer 234 and written to flash memory as a necessary backup, step 454. After the data has been copied out to flash, the clear flag is set. Once the old data has been copied to the buffer for writing to flash, the data type in the Q-R pointer table 232 may be set to read cache type 11, step 456. The requested flash memory is read, loaded into the SDRAM to replace the old data, and sent to the host, step 458. Q, R, and the data type in the Q-R pointer table 232 are updated to read cache type 11, step 460, and the procedure continues from point E in FIG. 13A at step 444.
[0162] When the data type recorded in the SDRAM is not a write cache type (but is type 00 or 11), step 452, the requested flash memory is read, loaded into the SDRAM, and sent to the host, step 458. Q, R, and the data type in the Q-R pointer table 232 are updated to read cache type 11, step 460, and the procedure continues from point E in FIG. 13A at step 444.
[0163] In FIG. 13A, when the host command is a write command, step 432, and the logical block address (LBA) from the host hits the command queue, step 434, the procedure continues from point D in FIG. 13C. The command queue is not changed, step 474. The write data from the host is written to the sector data buffer 234, step 466. Q, R, and the data type in the Q-R pointer table 232 are updated, step 472, and the procedure continues from point E in FIG. 13A at step 444.
[0164] In FIG. 13A, when the host command is a write command, step 432, and the logical block address (LBA) from the host does not hit the command queue, step 434, the procedure continues from point C in FIG. 13C. When the quotient Q matches in the Q-R pointer table 232, step 462, there is a matching entry in the sector data buffer 234. A new resident flag is set, step 464, indicating that the entry does not overlap another entry in the command queue. The write data from the host is written to the sector data buffer 234, step 466. Q, R, and the data type in the Q-R pointer table 232 are updated to type 01, step 472, and the procedure continues from point E in FIG. 13A at step 444.
[0165] When the quotient Q does not match in the Q-R pointer table 232, step 462, there is no matching entry in the sector data buffer 234. The old data is copied out of the sector data buffer 234 and written to flash memory, step 468. The clear flag is set, for example by setting the data type to read cache type 11; the clear flag indicates that the data has been sent to flash and can be safely overwritten. Once the old data has been copied to the buffer for writing to flash, the data type in the Q-R pointer table 232 may be set to read cache type 11, step 470. The write data from the host is written to the sector data buffer 234, step 466. Q, R, and the data type in the Q-R pointer table 232 are updated, step 472, and the procedure continues from point E in FIG. 13A at step 444.
[0166] In FIG. 13A, when the host command is a write command, step 432, and the logical block address (LBA) from the host hits the command queue, step 434, the procedure continues from point D in FIG. 13C. Nothing is done to the command queue, step 474, and the write data from the host is written to the sector data buffer 234, step 466. Q, R, and the data type in the Q-R pointer table 232 are updated to type 10, step 472, and the procedure continues from point E in FIG. 13A at step 444.
[0167] FIG. 14 highlights page alignment in SDRAM and flash. Each page may hold several sectors; in this embodiment, each page has 8 sectors. A host transfers 13 sectors without page alignment. The first four sectors 0, 1, 2, 3 are stored in page 1 of the sector data buffer 234 in the SDRAM 60, the next eight sectors 4 to 11 are stored in page 2, and the last sector 12 is stored in page 3.
[0168] When the data in the sector data buffer 234 is copied out to the flash memory, the data from this transfer is stored in three physical pages of flash. The three pages need not have consecutive page numbers and may be on different raw NAND flash chips 68. The logical block address, sequence number (SEQ#), and sector valid bits are also stored for each physical page of flash memory. The eight sector valid bits of physical page 101 are all set to 1 because all eight sectors are valid. The last four sector valid bits of physical page 100 are set to 1 because valid data is stored in the last four sectors of that page; these are sectors 0, 1, 2, and 3 of the host transfer. Physical page 102 receives the last sector 12 of the host transfer, stores it in the first sector position of physical page 102, and sets that sector valid bit to 1. The valid bits of the other seven sectors are set to 0, and the data in those seven sector positions remains unchanged.
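The 13-sector example above can be reproduced with a short calculation. The sketch below assumes 8 sectors per page and a transfer that starts four sectors into its first page, and derives which pages are touched and what the per-page sector valid bits look like.

```python
# Sketch of the FIG. 14 example: 8-sector pages, a 13-sector transfer that starts
# four sectors into its first page (offsets assumed from the text).
SECTORS_PER_PAGE = 8

def page_layout(start_offset, num_sectors):
    """Return (page_index, sector-valid bits) for one unaligned transfer."""
    pages, sector, remaining, page = [], start_offset, num_sectors, 0
    while remaining > 0:
        in_page = min(SECTORS_PER_PAGE - sector % SECTORS_PER_PAGE, remaining)
        valid = ["0"] * SECTORS_PER_PAGE
        for i in range(in_page):
            valid[(sector + i) % SECTORS_PER_PAGE] = "1"
        pages.append((page, "".join(valid)))
        sector, remaining, page = sector + in_page, remaining - in_page, page + 1
    return pages

if __name__ == "__main__":
    for page, bits in page_layout(start_offset=4, num_sectors=13):
        print(page, bits)
    # 0 00001111  <- last four sectors valid (sectors 0-3 of the transfer)
    # 1 11111111  <- all eight sectors valid (sectors 4-11)
    # 2 10000000  <- only the first sector valid (sector 12)
```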
[0169] FIG. 15 highlights the merging of unaligned data. Physical pages 100, 101, and 102 have been written as described for FIG. 14. New host data is written to pages 1 and 2 of the SDRAM buffer and matches the Q and R of the old data stored in physical page 101.
[0170] The sectors within page 1 holding data A, B, C, D, E are written to a new physical page 103. Because of this new transfer, the sequence number (SEQ#) of physical page 103 is incremented by 1.
[0171] The old physical page 101 becomes invalid, and its sector data 6, 7, 8, 9, 10, 11 are copied to a new physical page 200. Host data F, G from the SDRAM 60 are written to the first two sectors of physical page 200 to merge the data; the old data 4, 5 are replaced by the new data F, G. The SEQ# is used to distinguish which version is newer: in this case physical pages 101 and 200 have the same logical block address number, as shown in FIG. 15, and the firmware checks the SEQ# to determine which page (physical page 200) is valid.
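A brief sketch of how the sequence number distinguishes the stale page from the current one when two physical pages carry the same logical block address; the record layout is assumed.

```python
# Sketch: choose the live physical page when two pages share one logical address.
# The tuple layout (physical_page, lba, seq) is an assumption for illustration.
pages = [
    (101, 0, 1),   # old physical page 101, some LBA, SEQ# 1
    (200, 0, 2),   # merged physical page 200, same LBA, SEQ# 2
]

def live_page(candidates):
    """The page with the highest SEQ# for a given LBA is the valid copy."""
    return max(candidates, key=lambda p: p[2])[0]

print(live_page(pages))   # 200
```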
[0172] FIGS. 16A-16K show an embodiment using the SDRAM buffer and command queue in a flash memory system. For ease of illustration, the SDRAM 60 has a sector data buffer 234 with 16 locations for sector data. In this embodiment each location holds one sector, but other page-based embodiments may store multiple sectors per page location. The locations in the SDRAM 60 are labeled 0-15. Since there are 16 locations in the SDRAM 60, the logical block address (LBA) is divided by 16, and the remainder R selects one of the 16 locations.
[0173] In FIG. 16A, after initialization, the command queue 230 is empty and no host sector data is stored in the SDRAM 60. In FIG. 16B, the host writes C0 at LBA=1 with a length LEN of 3. An entry is loaded into the command queue 230 for write C0, with logical block address LBA set to 1 and length LEN set to 3. Since the LBA divided by 16 gives a quotient Q of 0 and a remainder R of 1, (0, 1) is stored as (Q, R). The data type DT is set to 10, indicating dirty data not yet copied out to flash. Data C0 is written to the SDRAM 60 at locations 1, 2, and 3. The three entries 1, 2, 3 of the Q-R pointer table 232 point to the corresponding locations in the sector data buffer 234, with (Q, R, DT) of the first sector being (0, 1, 10), the second (0, 2, 10), and the last (0, 3, 10). The written data value C0 can be any value, and each sector can hold a different value; in this embodiment, C0 simply identifies the write command.
[0174] In FIG. 16C, the host writes C1 at LBA=5 with a length LEN of 1. Another entry is loaded into the command queue 230 for write C1, with logical block address LBA set to 5 and length LEN set to 1. Since the quotient Q of the LBA divided by 16 is 0 and the remainder R is 5, (0, 5) is stored as (Q, R). The data type DT is set to 10, indicating dirty data not yet copied out to flash. Data C1 is written into the SDRAM 60 at location 5 of the sector data buffer 234. Entry 5 of the Q-R pointer table 232 is filled with (0, 5, 10).
[0175] In FIG. 16D, the host writes C2 at LBA=14 with a length LEN of 4. The command queue 230 is loaded with a third entry for write C2, with logical block address LBA set to 14 and length set to 4. Since the quotient Q of the LBA divided by 16 is 0 and the remainder R is 14, (0, 14) is stored as (Q, R). The data type DT is set to 10, indicating dirty data not yet copied out to flash.
[0176] Since the write of length LEN 4 covers locations 14, 15, 0, 1, wrapping past location 15 around to location 0, the cross-boundary flag X for this entry is set to 1. Since location 1 was previously written by C0, and C0 has not yet been written to flash, the old C0 data in location 1 must be copied to flash immediately. The data type of the first entry is changed to 01, which requests an immediate write to flash. This data type takes precedence over the others, so the copy out to flash happens sooner than other requests. After the copy to flash, the four entries 14, 15, 0, 1 of the Q-R pointer table 232 are filled with (0, 14, 10), (0, 15, 10), (1, 0, 10), and (1, 1, 10).
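The cross-boundary case in this step can be checked with a one-line computation; the 16-location buffer and the modular wrap are the same assumptions as in the earlier sketches.

```python
# Sketch: locations touched by a write of LEN sectors starting at LBA, and the
# X (cross-boundary) flag, with a 16-location sector data buffer (assumed size).
BUFFER_SECTORS = 16

def touched_locations(lba, length):
    locations = [(lba + i) % BUFFER_SECTORS for i in range(length)]
    crosses_boundary = locations[-1] < locations[0]
    return locations, crosses_boundary

print(touched_locations(14, 4))   # ([14, 15, 0, 1], True)
```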
[0177] In FIG. 16E, the copy-out of the old C0 data from location 1 has completed. The first entry of the command queue 230 is updated to account for location 1 having been copied out: the logical block address LBA is changed from 1 to 2, the remainder R is changed from 1 to 2, and the length is reduced from 3 to 2. The first entry of the command queue 230 now covers two sectors of the old write C0 instead of three. The data type is changed to read cache type 11 because the other locations 2, 3 were also copied to flash along with location 1.
[0178] At this point the old C0 data has been copied out, and the C2 write data from the host is written into sectors 14, 15, 0, 1 of sector data 234 in SDRAM 60, as shown in Figure 16E.
[0179] In Figure 16F, the host writes C3 with LBA=21 and a length of 3 sectors. A fourth entry is loaded into command queue 230 for write C3, with logical block address LBA set to 21 and length LEN set to 3. Dividing the LBA by 16 gives a quotient Q of 1 and a remainder R of 5, so 1 and 5 are stored as Q and R. The data type DT is set to 10 because the new C3 data will be dirty and not yet copied out to flash.
[0180] The new data C3 will be written to sectors 5, 6, 7 in SDRAM 60. These sectors are empty except for sector 5, which holds old C1 data that must be copied out to flash. The data type of the C1 entry for sector 5 in command queue 230 is changed to 01 to request an immediate write to flash. In Figure 16G, once this copy-out completes, the data type is changed to read-cache type 11 to show that the old C1 data has been copied to flash. The old C1 data still resides in sector 5 of sector data 234 in SDRAM 60.
[0181] In Figure 16H, the new C3 data is written to sectors 5, 6, 7 of sector data 234 in SDRAM 60. The old C1 data in sector 5 is overwritten, so the data type of its entry C1 in command queue 230 is changed to null, 00. The old C1 entry can then be cleared and overwritten by a new host command. Sectors 5, 6, 7 of the Q-R pointer table 232 are filled with (1, 5, 10), (1, 6, 10), and (1, 7, 10).
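This invalidation step may be sketched as follows, again under the assumed data layout used above:

    # Sketch of the Figure 16H invalidation: when new host data overwrites
    # sectors whose old command has already been copied to flash (DT=11), the
    # old entry's data type becomes null (00) so its slot can be reused.
    def invalidate_overwritten(command_queue, new_locations):
        for entry in command_queue:
            start = entry["lba"] % 16
            covered = {(start + i) % 16 for i in range(entry["len"])}
            if entry["dt"] == "11" and covered & set(new_locations):
                entry["dt"] = "00"  # stale read-cache entry, free for reuse

    # Figure 16H example: old C1 at sector 5 is overwritten by the new C3 write.
    queue = [{"cmd": "C1", "lba": 5, "len": 1, "dt": "11"}]
    invalidate_overwritten(queue, [5, 6, 7])
    assert queue[0]["dt"] == "00"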
[0182] In Figure 16I, the host reads R4 at LBA=18 with a length LEN of 3 sectors. Dividing the LBA by 16 gives a quotient Q of 1 and a remainder R of 2. A new entry for read R4 is loaded into command queue 230 with data type set to read-cache type 11, since the new clean data will be read from flash into sector data 234 of SDRAM 60.
[0183] A cache hit would require location R=2 to hold data with the same quotient Q of 1 and a data type of read-cache 11, indicating that valid sector data is available in SDRAM. However, locations R=2 and 3 were loaded earlier with C0, and the first entry C0 in command queue 230 shows a quotient Q of 0, which does not match the new quotient Q of 1. The host therefore cannot read the old C0 data from sector data 234 of SDRAM 60. Normally the old C0 data would first have to be copied out to flash; however, because its data type is already read-cache 11, the C0 data was already copied out in Figure 16D, so no further copy is required and the data can simply be overwritten. The old C0 entry is discarded as invalid, and the new R4 data is read from flash and written to sectors 2, 3, 4 of SDRAM 60, as shown in Figure 16J.
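The quotient-match check that decides between a cache hit, a forced copy-out, and a simple discard may be sketched as follows (hypothetical helper, illustrative return strings):

    # Hypothetical sketch of the Figure 16I read check: a read is served from
    # SDRAM only when the stored quotient matches; on a mismatch, old dirty
    # data (DT=10) is copied out first, while clean data (DT=11) is discarded.
    def handle_read(pointer_table, lba):
        q, r = divmod(lba, 16)
        old = pointer_table.get(r)
        if old is None:
            action = "fetch from flash"
        elif old["q"] == q and old["dt"] != "00":
            action = "serve from SDRAM"                      # cache hit
        elif old["dt"] == "10":
            action = "copy old dirty data to flash, then fetch"
        else:
            action = "discard old entry, fetch from flash"   # already in flash
        if action != "serve from SDRAM":
            pointer_table[r] = {"q": q, "dt": "11"}          # new clean data cached
        return action

    # Figure 16I example: location 2 holds old C0 data (Q=0, DT=11); the read of
    # LBA=18 gives Q=1, R=2, a quotient mismatch, so C0 is discarded and R4 fetched.
    table = {2: {"q": 0, "dt": "11"}}
    assert handle_read(table, 18) == "discard old entry, fetch from flash"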
[0184] In Figure 16K, the new data R4 is read from sectors 2, 3, and 4 of sector data 234 in SDRAM 60 and sent to the host. Sectors 2, 3, 4 of the Q-R pointer table 232 are filled with (1, 2, 11), (1, 3, 11), and (1, 4, 11). Sectors 0 and 1 remain unchanged.
[0185] Alternative Embodiment
[0186] Other embodiments are contemplated. For example, Figure 1A and the other figures can have many variants. A ROM such as an EEPROM may be connected to or be part of virtual storage processor 140, or another virtual storage bridge 42 and NVM controller 76 may connect virtual storage processor 140 to another raw NAND flash chip 68 used to store the firmware of virtual storage processor 140. The firmware can also be stored in the main flash modules.
[0187] Flash memory can be embedded on the motherboard or SSD board, or can be on separate modules. Capacitors, buffers, resistors, or other components can be added. The smart memory switch 30 may be integrated on the motherboard or on a separate board or module. NVM controller 76 may be integrated with smart memory switch 30 or with raw NAND flash memory chips 68, as a single-chip device, a plug-in module, or a board.
[0188] With the controller's two-level configuration, the controller in smart storage switch 30 can be simpler than the controller required for single-level control of wear leveling, bad-block management, remapping, caching, power management, and the like. The higher-level functions in smart storage switch 30 can be simplified because the lower-level functions are performed by NVM controllers 76, which act at the supervisory level for the raw NAND flash chips 68 in each flash module 73. Less expensive hardware can be used in smart storage switch 30, such as an 8051 processor for virtual storage processor 140 or smart storage transaction manager 36, rather than a more expensive processor core such as an Advanced RISC Machine ARM-9 CPU core.
[0189] Different numbers and arrangements of flash blocks can be connected to the smart storage switch. Rather than using the logical block address bus 28 or a differential serial packet bus, other buses such as a synchronous double-data-rate (DDR) bus, a differential serial packet data bus, a legacy flash interface, etc. may be used.
[0190] The mode logic can sense the state of a pin only at power-up, rather than sensing the state of a dedicated pin. A mode change can be initiated by some combination or sequence of pin states, or an internal register such as a configuration register can set the mode. Multi-bus-protocol chips may have additional personalization pins to select which serial bus interface to use, or may have programmable registers that set the mode to hub mode or switch mode.
[0191] The transaction manager and its controllers and functions can be implemented in a variety of ways. Functions may be programmed and executed by a CPU or other processor, or may be implemented in dedicated hardware, firmware, or some combination. Many different partitionings of the functionality can be substituted.
[0192] By using parity/error-correction code (ECC) across multiple NVM controllers 76 and distributing data segments over multiple non-volatile memory blocks, the reliability of the overall system is greatly improved. Nonetheless, a CPU engine with a DDR/SDRAM cache may still be needed to provide the computational power required for complex error-correction/parity computation and generation. Another advantage is that data can be recovered even if a flash block or flash module is damaged; the smart storage switch can initiate a "Fault Recovery" or "Auto-Rebuild" process when new flash memory modules are inserted, recovering or rebuilding lost or corrupted data. The overall system fault tolerance is thereby greatly improved.
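As a hedged illustration of the recovery idea only (the invention is not limited to this scheme), a single XOR parity segment striped across several flash modules allows any one lost segment to be rebuilt from the survivors:

    # Hedged illustration only (not the invention's particular parity/ECC
    # scheme): one XOR parity segment striped across flash modules lets a
    # single lost data segment be rebuilt from the surviving segments.
    from functools import reduce

    def make_parity(segments):
        """XOR equal-length data segments into one parity segment."""
        return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*segments))

    def rebuild(surviving_segments, parity):
        """Recover the single missing segment from the survivors plus parity."""
        return make_parity(surviving_segments + [parity])

    data = [b"SEG1", b"SEG2", b"SEG3"]          # segments on three flash modules
    parity = make_parity(data)                  # parity stored on a fourth module
    assert rebuild([data[0], data[2]], parity) == data[1]   # lost segment restored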
[0193] Wider or narrower data buses and flash memory chips, such as those with 16- or 32-bit data channels, can be used instead. Alternate bus architectures, such as nested or segmented buses, can be used inside or outside the smart memory switch. The smart memory switch can use two or more internal buses to increase data throughput. More complex switch structures can replace the internal or external buses.
[0194] Data segmentation can be done in various ways, as can parity and error-correction codes (ECC). The reordering of packets can be adjusted to the data arrangement to prevent reordering of overlapping storage locations. The smart switch can be integrated with other components or implemented as a stand-alone chip.
[0195] Additional pipeline or temporary buffers and FIFO data buffers can be added. For example, the host FIFO in smart storage switch 30 may be part of smart storage transaction manager 36 or may be stored in SDRAM 60. Independent page buffers can be set up within each channel. When the raw NAND flash chips 68 of flash module 73 have an asynchronous interface, the external clock input CLK_SRC of flash module 73 shown in Figure 2 becomes unnecessary.
[0196] A single package, a single chip, or a multi-chip package may contain one or more flash memory channels and/or smart memory switches.
[0197] A multi-level-cell (MLC) based flash module 73 can have four MLC flash chips with two parallel data channels, but other flash modules 73 can be formed with different combinations, such as four, eight, or more data channels, or eight, sixteen, or more MLC flash chips. Flash modules and channels can be arranged as chains, branches, or arrays. For example, a branch containing four flash modules 73 can be linked to smart storage switch 30 as a chain. Memory can be accessed with other aggregation or partitioning schemes of different sizes. Other non-volatile memory types may be substituted for the flash memory, such as silicon-oxide-nitride-oxide-silicon (SONOS) flash memory, phase-change memory (PCM), ferroelectric random-access memory (FRAM), magnetoresistive random-access memory (MRAM), memristor memory, phase-change random-access memory (PRAM), resistive random-access memory (RRAM), racetrack memory, and nano random-access memory (NRAM).
[0198] The host may be a personal computer (PC) motherboard or other personal computer platform, a mobile communication device, a personal digital assistant (PDA), a digital camera, an associated device, or another device. The host bus or host-device interface can be SATA, PCIE, SD, USB, or another host bus, while the internal bus of flash module 73 can be PATA, a multi-channel SSD using multiple SD/MMC, Compact Flash (CF), USB, or another parallel interface. Flash module 73 may be a standard printed circuit board (PCB), or may be packaged in a TSOP, BGA, LGA, COB, PIP, SIP, CSP, POP, or multi-chip package (MCP), and may contain raw NAND flash memory chips 68, or the raw NAND flash memory chips 68 may be in separate flash-chip packages. The internal bus may be fully or partially shared, or may be separate buses. SSD systems can use boards with other components such as LED indicators, capacitors, resistors, etc.
[0199] Directional terms such as above, below, up, down, top, bottom, etc. are relative and vary as the system or data is rotated, flipped, etc. These terms are used to describe the apparatus and are not meant to be limiting.
[0200] The flash module 73 may have a single-chip package containing the packaged controller and flash die, which may be integrated on a PCBA, or directly on the motherboard to further simplify assembly, reduce manufacturing costs, and reduce overall thickness. The flash chip can also be used with other embodiments that include open frame flash cards.
[0201] Rather than using smart memory switch 30 only for flash memory, other features can be added. For example, a music player may contain a controller that plays audio from MP3 data stored in flash memory. A unit can add an audio jack to allow users to plug in headphones to listen to music. A wireless transmitter, such as a Bluetooth transmitter, can be added to connect to wireless headphones instead of using the audio jack. Infrared transmitters such as IrDA can also be added. A Bluetooth transceiver can also be added to communicate with wireless mice, PDAs, keyboards, printers, digital cameras, MP3 players, or other wireless devices. The Bluetooth transceiver can replace the connector as the primary connector. A Bluetooth adapter can have a connector, a radio-frequency (RF) transceiver, a baseband controller, an antenna, flash memory (EEPROM), a voltage regulator, a crystal, light-emitting diodes (LEDs), resistors, capacitors, and inductors, among other components. These components can be mounted on a PCB and then encased in a plastic or metal enclosure.
[0202] The Background of the Invention section may contain background information about the problem or environment of the invention, rather than introducing the prior art of others. Accordingly, the inclusion of information in the Background section does not constitute an admission of prior art by the patent applicant.
[0203] Any method or process described herein is machine-implemented or computer-implemented, desirably performed by a machine, computer, or other device, rather than manually without the assistance of such a machine. The tangible results produced may include machine-generated reports or other content displayed on display devices (eg, computer monitors, projection devices, audio-generating devices, and related media devices), and may also include machine-generated printouts. Computer control of other machines is another tangible result.
[0204] Any advantages and benefits described may not apply to all embodiments of the invention. When the word "means" is recited in a claim, the applicant intends for the claim to fall under 35 USC Sect. 112, paragraph 6. Often a label of one or more words precedes the word "means". The word or words preceding "means" are a label intended to ease referencing of claim elements and are not intended to convey a structural limitation. Such means-plus-function claims cover not only the structures described herein for performing the function and their structural equivalents, but also equivalent structures. For example, although a nail and a screw have different structures, they are equivalent structures because they both perform the fastening function. Claims that do not use the word "means" are not intended to fall under 35 USC Sect. 112, paragraph 6. Signals are typically electronic signals, but may also be optical signals, such as signals carried over fiber-optic lines.
[0205] The above descriptions are only preferred embodiments of the present invention and are not intended to limit the present invention. Any modifications, equivalent replacements, and improvements made within the spirit and principles of the present invention shall fall within the scope of protection of the present invention.