11,314 results about "Redundant data error correction" patented technology

Multi-dimensional data protection and mirroring method for micro level data

The invention discloses a data validation, mirroring and error/erasure correction method for the dispersal and protection of one- and two-dimensional data at the micro level for computer, communication and storage systems. Each of the 256 possible 8-bit data bytes is mirrored with a unique 8-bit ECC byte. The ECC enables 8-bit burst and 4-bit random error detection plus 2-bit random error correction for each encoded data byte. With the data byte and ECC byte configured into a 4-bit × 4-bit codeword array and dispersed in the row dimension, the column dimension or both, the method can perform dual 4-bit row and column erasure recovery. It is shown that for each codeword there are 12 possible combinations of row and column elements, called couplets, capable of mirroring the data byte. These byte-level micro-mirrors outperform conventional mirroring in that each byte and its ECC mirror can self-detect and self-correct random errors and can recover all dual-erasure combinations over four elements. Encoding at the byte quanta level maximizes application flexibility. Also disclosed are fast encode, decode and reconstruction methods via Boolean logic, processor instructions and software table look-up, intended to run at line and application speeds. The new error-control method can augment ARQ algorithms and bring resiliency to system fabrics, including routers and links previously limited to the recovery of transient errors. Image storage and storage over arrays of static devices can benefit from the two-dimensional capabilities. Applications with critical data-integrity requirements can utilize the method for end-to-end protection and validation. An extra ECC byte per codeword extends both the resiliency and the dimensionality.
Owner:HALFORD ROBERT
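
A minimal sketch of this codeword layout follows. The patent's actual 256-entry ECC table is not reproduced here; `ecc_table` below is a hypothetical placeholder (a simple bit rotation, chosen only because it maps each byte to a unique partner) with none of the claimed detection or correction properties. The sketch shows only how a data byte and its ECC mirror are arranged as a 4-bit × 4-bit codeword array whose four 4-bit elements can be dispersed by row or column.

```python
# Sketch of the byte-level micro-mirror layout. The real ECC mapping is
# defined in the patent; this rotate-left-by-1 table is a placeholder only.
ecc_table = [((b << 1) | (b >> 7)) & 0xFF for b in range(256)]

def encode_codeword(data_byte: int) -> list:
    """Pair a data byte with its ECC byte and lay the 16 bits out as a
    4x4 array: rows 0-1 hold the data byte, rows 2-3 the ECC byte."""
    ecc_byte = ecc_table[data_byte]
    bits = [(data_byte >> (7 - i)) & 1 for i in range(8)] \
         + [(ecc_byte >> (7 - i)) & 1 for i in range(8)]
    return [bits[r * 4:(r + 1) * 4] for r in range(4)]

# Each 4-bit row (or column) can be dispersed to a different device or
# link, so that any two lost elements remain recoverable per the patent.
for row in encode_codeword(0xA5):
    print(row)
```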

Method for allocating files in a file system integrated with a RAID disk sub-system

The present invention is a method for integrating a file system with a RAID array that exports precise information about the arrangement of data blocks in the RAID subsystem. The file system examines this information and uses it to optimize the location of blocks as they are written to the RAID system; thus, the system uses explicit knowledge of the underlying RAID disk layout to schedule disk allocation. The present invention uses a separate current-write-location (CWL) pointer for each disk in the disk array, where the pointers simply advance through the disks as writes occur. The algorithm has two primary goals. The first is to keep the CWL pointers as close together as possible, thereby improving RAID efficiency by writing to multiple blocks in a stripe simultaneously. The second is to allocate adjacent blocks in a file on the same disk, thereby improving read-back performance. The first goal is satisfied by always writing on the disk with the lowest CWL pointer. For the second goal, a new disk is chosen only when the algorithm starts allocating space for a new file, or when it has allocated N blocks on the same disk for a single file; a sufficient number of blocks is defined as all the buffers in a chunk of N sequential buffers in a file. The result is that the CWL pointers are never more than N blocks apart on different disks, and large files have N consecutive blocks on the same disk.
Owner:NETWORK APPLIANCE INC
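
The allocation policy can be sketched in a few lines. The Python below is an illustrative reconstruction, not the patented implementation: the `Allocator` class, the four-disk array and the value N = 8 are all assumptions for this example. It keeps a file on one disk until N blocks have been placed there, then switches to the disk with the lowest CWL pointer.

```python
# Illustrative sketch of CWL-based allocation; N and the data structures
# are assumptions for this example, not taken from the patent text.
N = 8  # blocks placed on one disk per file before choosing a new disk

class Allocator:
    def __init__(self, num_disks: int):
        self.cwl = [0] * num_disks  # one current-write-location pointer per disk
        self.run = {}               # file_id -> (disk index, blocks in current run)

    def allocate(self, file_id) -> tuple:
        disk, count = self.run.get(file_id, (None, N))
        if disk is None or count >= N:
            # New file, or N blocks already placed on this disk: switch to
            # the disk with the lowest CWL pointer, keeping pointers close.
            disk, count = min(range(len(self.cwl)), key=self.cwl.__getitem__), 0
        block = self.cwl[disk]
        self.cwl[disk] += 1         # pointers only ever advance
        self.run[file_id] = (disk, count + 1)
        return disk, block

alloc = Allocator(num_disks=4)
print([alloc.allocate("fileA") for _ in range(10)])  # disk switch after 8 blocks
```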

Novel massively parallel supercomputer

A novel massively parallel supercomputer operating at a scale of hundreds of teraOPS includes node architectures based upon System-On-a-Chip technology, i.e., each processing node comprises a single Application Specific Integrated Circuit (ASIC). Within each ASIC node is a plurality of processing elements, each of which consists of a central processing unit (CPU) and a plurality of floating-point processors, enabling an optimal balance of computational performance, packaging density, low cost, and power and cooling requirements. The processors within a single node may be used individually or simultaneously to work on any combination of computation and communication as required by the particular algorithm being solved or executed at any point in time. The system-on-a-chip ASIC nodes are interconnected by multiple independent networks that maximize packet-communication throughput and minimize latency. In the preferred embodiment, the multiple networks include three high-speed networks for parallel-algorithm message passing: a Torus, a Global Tree, and a Global Asynchronous network that provides global barrier and notification functions. These independent networks may be utilized collaboratively or separately, according to the needs or phases of an algorithm, to optimize processing performance. For particular classes of parallel algorithms, or parts of parallel calculations, this architecture exhibits exceptional computational performance and may enable calculations for new classes of parallel algorithms. Additional networks are provided for external connectivity and are used for Input/Output, System Management and Configuration, and Debug and Monitoring functions. Special node-packaging techniques implementing midplanes and other hardware devices facilitate partitioning of the supercomputer into multiple networks for optimizing supercomputing resources.
Owner:INT BUSINESS MASCH CORP
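
Of the three message-passing networks, the torus is the simplest to illustrate. The sketch below assumes hypothetical 8 × 8 × 8 dimensions (not the machine's actual configuration) and shows only the wrap-around neighbor addressing that gives every node six nearest neighbors.

```python
# Neighbor addressing on a 3-D torus; dimensions are assumed for the example.
DIMS = (8, 8, 8)

def torus_neighbors(x: int, y: int, z: int) -> list:
    """Return the six nearest neighbors of node (x, y, z); the modular
    arithmetic supplies the wrap-around links that close the torus."""
    nx, ny, nz = DIMS
    return [
        ((x + 1) % nx, y, z), ((x - 1) % nx, y, z),
        (x, (y + 1) % ny, z), (x, (y - 1) % ny, z),
        (x, y, (z + 1) % nz), (x, y, (z - 1) % nz),
    ]

print(torus_neighbors(0, 0, 7))  # an edge node still has six neighbors
```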

Data file migration from a mirrored RAID to a non-mirrored XOR-based RAID without rewriting the data

A data storage methodology wherein a data file is initially stored in a format consistent with both RAID-1 and RAID-X and then migrated to a format consistent with RAID-X but inconsistent with RAID-1 when the data file grows beyond a certain size threshold. Here, RAID-X refers to any non-mirrored storage scheme employing XOR-based error correction coding (e.g., a RAID-5 configuration). Each component object (including the data objects and the parity object) for the data file is configured to be stored in a different stripe unit per object-based secure disk. Each stripe unit may store, for example, 64 KB of data. So long as the data file does not grow beyond the size threshold of a stripe unit (e.g., 64 KB), the parity stripe unit contains a mirrored copy of the data stored in one of the data stripe units, because the input data is exclusive-ORed with the "all zeros" assumed to be contained in empty or partially filled stripe units. When the file grows beyond the size threshold, the parity stripe unit starts storing parity information instead of a mirrored copy of the file data. Thus, the data file can be automatically migrated from a format consistent with RAID-1 and RAID-X to a format consistent with RAID-X and inconsistent with RAID-1 without the need to duplicate or rewrite the stored data.
Owner:PANASAS INC
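
The mirror-to-parity transition follows directly from the algebra of XOR: parity computed over one written stripe unit and all-zero empty units is a byte-for-byte copy of the written unit. The demonstration below shrinks the stripe unit to four bytes for readability; the function and sizes are illustrative, not taken from the patent.

```python
# Toy demonstration: XOR parity over mostly-empty stripe units is a mirror.
def xor_parity(stripe_units: list) -> bytes:
    """Byte-wise XOR across all data stripe units (the RAID-X parity)."""
    parity = bytearray(len(stripe_units[0]))
    for unit in stripe_units:
        for i, b in enumerate(unit):
            parity[i] ^= b
    return bytes(parity)

empty = bytes(4)                           # unwritten units count as all zeros
small_file = [b"DATA", empty, empty]       # file fits in one stripe unit
assert xor_parity(small_file) == b"DATA"   # parity IS a mirror: RAID-1 consistent

grown_file = [b"DATA", b"MORE", empty]     # file grew past the threshold
print(xor_parity(grown_file))              # now true parity, no longer a mirror
```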

Fast primary cluster recovery

A cluster recovery process is implemented across a set of distributed archives, where each individual archive is a storage cluster of preferably symmetric nodes. Each node of a cluster typically executes an instance of an application that provides object-based storage of fixed-content data and associated metadata. According to the storage method, an association or "link" between a first cluster and a second cluster is first established to facilitate replication; the first cluster is sometimes referred to as the "primary," and the second cluster as the "replica." Once the link is made, the first cluster's fixed-content data and metadata are replicated from the first cluster to the second cluster, preferably in a continuous manner. Upon a failure of the first cluster, a failover operation occurs, and clients of the first cluster are redirected to the second cluster. Upon repair or replacement of the first cluster (a "restore"), the repaired or replaced first cluster resumes authority for servicing its clients. This restore operation preferably occurs in two stages: a "fast recovery" stage involving a preferably "bulk" transfer of the first cluster's metadata, followed by a "fail back" stage involving the transfer of the fixed-content data. Upon receipt of the metadata from the second cluster, the repaired or replaced first cluster resumes authority for the clients irrespective of whether the fail back stage has completed or even begun.
Owner:HITACHI VANTARA LLC
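
The two-stage restore reduces to a short control-flow sketch. Everything in the snippet below (the Cluster class, the attribute names, the client-redirect mechanism) is an invented stand-in, since the abstract does not prescribe an API; the point is only the ordering: metadata first, authority second, fixed-content data last.

```python
# Hedged sketch of the two-stage restore; all names are illustrative.
class Cluster:
    def __init__(self, name: str):
        self.name = name
        self.metadata = {}  # object metadata
        self.objects = {}   # fixed-content data

def restore_primary(primary: Cluster, replica: Cluster, clients: list) -> None:
    # Stage 1, "fast recovery": bulk transfer of metadata to the
    # repaired or replaced primary.
    primary.metadata = dict(replica.metadata)

    # The primary resumes authority for its clients as soon as it holds
    # the metadata, whether or not the fail back stage has begun.
    for client in clients:
        client["target"] = primary.name

    # Stage 2, "fail back": transfer the fixed-content data itself.
    primary.objects.update(replica.objects)

replica = Cluster("replica")
replica.metadata, replica.objects = {"doc1": {"size": 3}}, {"doc1": b"abc"}
primary = Cluster("primary")            # freshly repaired, currently empty
clients = [{"target": "replica"}]       # clients failed over to the replica
restore_primary(primary, replica, clients)
print(clients[0]["target"], primary.metadata, primary.objects)
```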