5117 results for "Data buffer" patented technology

In computer science, a data buffer (or just buffer) is a region of physical memory storage used to temporarily hold data while it is being moved from one place to another. Typically, data is stored in a buffer as it is retrieved from an input device (such as a microphone) or just before it is sent to an output device (such as speakers). However, a buffer may also be used when moving data between processes within a computer, comparable to buffers in telecommunication. Buffers can be implemented in a fixed memory location in hardware, or by using a virtual data buffer in software that points at a location in physical memory. In all cases, the data held in a buffer resides on a physical storage medium. The majority of buffers are implemented in software, which typically uses the faster RAM to store temporary data because of its much faster access time compared with hard disk drives. Buffers are typically used when there is a difference between the rate at which data is received and the rate at which it can be processed, or when these rates are variable, as in a printer spooler or online video streaming. In distributed computing environments, a data buffer is often implemented as a burst buffer, which provides a distributed buffering service.
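The rate-mismatch role described above is commonly filled by a ring (circular) buffer: the producer writes at one index while the consumer drains at another. The following is a minimal sketch in C; the names (ring_buffer, rb_push, rb_pop) and the fixed capacity are illustrative choices, not taken from any of the patents below.

    #include <stdbool.h>
    #include <stddef.h>

    #define RB_CAPACITY 1024  /* fixed-size backing store in RAM */

    typedef struct {
        unsigned char data[RB_CAPACITY];
        size_t head;   /* next slot to write */
        size_t tail;   /* next slot to read */
        size_t count;  /* bytes currently buffered */
    } ring_buffer;

    /* Producer side: store a byte arriving from the faster device. */
    static bool rb_push(ring_buffer *rb, unsigned char byte) {
        if (rb->count == RB_CAPACITY)
            return false;                    /* full: caller must block or drop */
        rb->data[rb->head] = byte;
        rb->head = (rb->head + 1) % RB_CAPACITY;
        rb->count++;
        return true;
    }

    /* Consumer side: drain a byte at the slower processing rate. */
    static bool rb_pop(ring_buffer *rb, unsigned char *out) {
        if (rb->count == 0)
            return false;                    /* empty: nothing to process yet */
        *out = rb->data[rb->tail];
        rb->tail = (rb->tail + 1) % RB_CAPACITY;
        rb->count--;
        return true;
    }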

System and method for secure and reliable multi-cloud data replication

A multi-cloud data replication method includes providing a data replication cluster comprising at least a first host node and at least a first online storage cloud. The first host node is connected to the first online storage cloud via a network and comprises a server, a cloud array application and a local cache. The local cache comprises a buffer and a first storage volume comprising data cached in one or more buffer blocks of the local cache's buffer. Next, authorization is requested to perform a cache flush of the cached first storage volume data to the first online storage cloud. Upon approval of the authorization, the cached first storage volume data in each of the one or more buffer blocks is encrypted with a data private key. Next, metadata comprising at least a unique identifier is assigned to each of the one or more buffer blocks, and the metadata is encrypted with a metadata private key. Next, the one or more buffer blocks with the encrypted first storage volume data are transmitted to the first online storage cloud. Finally, a sequence of metadata updates is created, encrypted with the metadata private key and transmitted to the first online storage cloud.
Owner:EMC IP HLDG CO LLC
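As a rough illustration of the flush sequence this abstract describes (encrypt each cached buffer block with a data key, attach metadata carrying a unique identifier encrypted with a separate metadata key, then transmit), here is a hedged C sketch. All types and helpers (block_t, metadata_t, xor_encrypt, cloud_send) are hypothetical stand-ins; the patent does not specify these interfaces, and the XOR "cipher" is only a placeholder for real encryption.

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef struct { uint8_t bytes[4096]; size_t len; } block_t;  /* a cached buffer block */
    typedef struct { uint64_t unique_id; } metadata_t;            /* at least a unique identifier */

    /* Placeholder "cipher": stands in for encryption with a private key. */
    static void xor_encrypt(uint8_t *buf, size_t len, uint8_t key) {
        for (size_t i = 0; i < len; i++)
            buf[i] ^= key;
    }

    /* Placeholder transport to the online storage cloud. */
    static void cloud_send(const void *buf, size_t len) {
        printf("sending %zu bytes to the storage cloud\n", len);
        (void)buf;
    }

    /* Flush authorized cached blocks of the first storage volume to the cloud. */
    static void flush_cached_blocks(block_t *blocks, metadata_t *meta, size_t n,
                                    uint8_t data_key, uint8_t meta_key) {
        for (size_t i = 0; i < n; i++) {
            /* 1. Encrypt the cached data with the data private key. */
            xor_encrypt(blocks[i].bytes, blocks[i].len, data_key);

            /* 2. Assign metadata, then encrypt it with the metadata private key. */
            meta[i].unique_id = (uint64_t)i;   /* stand-in for a real unique ID */
            xor_encrypt((uint8_t *)&meta[i], sizeof meta[i], meta_key);

            /* 3. Transmit the encrypted block to the online storage cloud. */
            cloud_send(blocks[i].bytes, blocks[i].len);
        }
        /* 4. Transmit the (encrypted) sequence of metadata updates. */
        cloud_send(meta, n * sizeof *meta);
    }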

Data processing apparatus and method for performing hazard detection

A data processing apparatus and method are provided for performing hazard detection in respect of a series of access requests issued by processing circuitry for handling by one or more slave devices. The series of access requests include one or more write access requests, each write access request specifying a write operation to be performed by an addressed slave device, and each issued write access request being a pending write access request until the write operation has been completed by the addressed slave device. Hazard detection circuitry comprises a pending write access history storage having at least one buffer and at least one counter for keeping a record of each pending write access request. Update circuitry is responsive to receipt of a write access request to be issued by the processing circuitry, to perform an update process to identify that write access request as a pending write access request in one of the buffers, and if the identity of another pending write access request is overwritten by that update process, to increment a count value in one of the counters. On completion of each write access request by the addressed slave device, the update circuitry performs a further update process to remove the record of that completed write access request from the pending write access history storage. Hazard checking circuitry is then responsive to at least a subset of the access requests to be issued by the processing circuitry, to reference the pending write access history storage in order to determine whether a hazard condition occurs. The manner in which the update circuitry uses a combination of buffers and counters to keep a record of each pending write access request provides improved performance with respect to known prior art techniques, without the hardware cost that would be associated with increasing the number of buffers.
Owner:ARM LTD
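To make the buffer-plus-counter idea concrete, here is a speculative C sketch under an assumption the abstract leaves open: a small direct-mapped table of pending write addresses. When recording a new pending write evicts an older record, a counter is incremented so no pending write is forgotten; the hazard check then treats any non-zero counter conservatively, since the evicted identities are unknown.

    #include <stdbool.h>
    #include <stdint.h>

    #define N_BUFFERS 4   /* assumed table size; the patent leaves this open */

    typedef struct {
        uint32_t addr[N_BUFFERS];   /* addresses of pending write accesses */
        bool     valid[N_BUFFERS];
        unsigned overflow;          /* pending writes whose identity was overwritten */
    } pending_writes;

    /* Record a newly issued write; evicting an older record bumps the counter. */
    static void record_write(pending_writes *pw, uint32_t addr) {
        unsigned slot = addr % N_BUFFERS;   /* assumed direct-mapped placement */
        if (pw->valid[slot])
            pw->overflow++;                 /* identity lost, but count preserved */
        pw->addr[slot] = addr;
        pw->valid[slot] = true;
    }

    /* On completion, remove the record; an evicted record lives in the counter. */
    static void complete_write(pending_writes *pw, uint32_t addr) {
        unsigned slot = addr % N_BUFFERS;
        if (pw->valid[slot] && pw->addr[slot] == addr)
            pw->valid[slot] = false;
        else
            pw->overflow--;                 /* must be one of the counted evictees */
    }

    /* Hazard check: a buffer match, or any counted (identity-unknown) pending
       write, is treated as a potential hazard. */
    static bool hazard(const pending_writes *pw, uint32_t addr) {
        unsigned slot = addr % N_BUFFERS;
        if (pw->valid[slot] && pw->addr[slot] == addr)
            return true;
        return pw->overflow > 0;            /* conservative: identities unknown */
    }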

Flexible engine and data structure for packet header processing

A pipelined linecard architecture for receiving, modifying, switching, buffering, queuing and dequeuing packets for transmission in a communications network. The linecard has two paths: the receive path, which carries packets into the switch device from the network, and the transmit path, which carries packets from the switch to the network. In the receive path, received packets are processed and switched in an asynchronous, multi-stage pipeline utilizing programmable data structures for fast table lookup and linked-list traversal. The pipelined switch operates on several packets in parallel while determining each packet's routing destination. Once that determination is made, each packet is modified to contain new routing information as well as additional header data to help speed it through the switch. Each packet is then buffered and enqueued for transmission over the switching fabric to the linecard attached to the proper destination port. The destination linecard may be the same physical linecard as that receiving the inbound packet or a different physical linecard. The transmit path consists of a buffer/queuing circuit similar to that used in the receive path. Both enqueuing and dequeuing of packets are accomplished using CoS-based decision-making apparatus and congestion avoidance and dequeue management hardware. The architecture of the present invention has the advantages of high throughput and the ability to rapidly implement new features and capabilities.
Owner:CISCO TECH INC
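The abstract's CoS-based enqueuing with congestion avoidance can be pictured as one queue per class of service with an early-drop threshold. The sketch below is a toy model, not the patent's hardware: the fixed threshold merely stands in for whatever congestion-avoidance policy the linecard's dequeue-management circuitry applies, and all names are illustrative.

    #include <stdbool.h>
    #include <stddef.h>

    #define N_CLASSES   8     /* one queue per class of service (CoS) */
    #define Q_DEPTH     256   /* assumed per-queue depth */
    #define DROP_THRESH 240   /* assumed congestion-avoidance threshold */

    typedef struct packet packet;   /* opaque packet handle */

    typedef struct {
        packet *slots[Q_DEPTH];
        size_t head, tail, len;
    } cos_queue;

    static cos_queue queues[N_CLASSES];

    /* Enqueue on the queue selected by the packet's CoS value; drop early
       once the queue passes the congestion threshold. */
    static bool enqueue(packet *p, unsigned cos) {
        cos_queue *q = &queues[cos % N_CLASSES];
        if (q->len >= DROP_THRESH)
            return false;               /* congestion avoidance: drop */
        q->slots[q->tail] = p;
        q->tail = (q->tail + 1) % Q_DEPTH;
        q->len++;
        return true;
    }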

High-speed hardware implementation of MDRR algorithm over a large number of queues

(Abstract identical to that of "Flexible engine and data structure for packet header processing" above.)
Owner:CISCO TECH INC
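MDRR (modified deficit round robin) extends deficit round robin, in which each backlogged queue accumulates a per-round byte quantum of credit and may transmit packets until the head packet no longer fits within that credit; the "modified" variant also gives one designated queue priority service between visits to the others. The sketch below shows only the plain DRR round, with illustrative queue structures that are not taken from the patent.

    #include <stdio.h>

    #define N_QUEUES 3
    #define MAX_PKTS 16

    typedef struct {
        int pkt_len[MAX_PKTS];  /* byte lengths of queued packets (illustrative) */
        int head, count;        /* ring-queue bookkeeping */
        int deficit;            /* accumulated transmission credit, in bytes */
        int quantum;            /* credit added per round: the queue's weight */
    } drr_queue;

    /* One round of deficit round robin: each backlogged queue gains its
       quantum of credit and transmits packets while the head packet fits.
       MDRR would additionally service a designated priority queue between
       visits to the other queues; that refinement is omitted here. */
    static void drr_round(drr_queue *qs, int n) {
        for (int i = 0; i < n; i++) {
            drr_queue *q = &qs[i];
            if (q->count == 0) {
                q->deficit = 0;         /* idle queues forfeit accumulated credit */
                continue;
            }
            q->deficit += q->quantum;
            while (q->count > 0 && q->pkt_len[q->head] <= q->deficit) {
                q->deficit -= q->pkt_len[q->head];
                printf("queue %d: transmit %d-byte packet\n", i, q->pkt_len[q->head]);
                q->head = (q->head + 1) % MAX_PKTS;
                q->count--;
            }
            if (q->count == 0)
                q->deficit = 0;         /* emptied mid-round: reset credit */
        }
    }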