4284 results for "Linked list" patented technology

In computer science, a linked list is a linear collection of data elements whose order is not given by their physical placement in memory; instead, each element points to the next. It is a data structure consisting of a collection of nodes which together represent a sequence. In its most basic form, each node contains data and a reference (in other words, a link) to the next node in the sequence. This structure allows efficient insertion or removal of elements from any position in the sequence during iteration. More complex variants add additional links, allowing more efficient insertion or removal of nodes at arbitrary positions. A drawback of linked lists is that access time is linear (and difficult to pipeline); faster access, such as random access, is not feasible, and arrays have better cache locality than linked lists.
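
For concreteness, here is a minimal sketch of a singly linked list in Python, showing why splicing a node in or out next to a known node is O(1) while positional access is O(n); all names are illustrative.

```python
class Node:
    """A singly linked list node: a payload plus a link to its successor."""
    def __init__(self, data, next=None):
        self.data = data
        self.next = next

def insert_after(node, data):
    """O(1): splice a new node in right after `node`."""
    node.next = Node(data, node.next)

def remove_after(node):
    """O(1): unlink the node following `node`, if any."""
    if node.next is not None:
        node.next = node.next.next

def nth(head, n):
    """O(n): access by position requires walking the chain."""
    while n > 0 and head is not None:
        head, n = head.next, n - 1
    return head

head = Node(1, Node(2, Node(3)))
insert_after(head, 99)       # 1 -> 99 -> 2 -> 3
remove_after(head.next)      # 1 -> 99 -> 3
print(nth(head, 2).data)     # 3
```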

System for directly accessing fields on electronic forms

A computer system provides a graphical user interface (GUI) to assist a user in completing electronic forms. The computer includes components such as a processor, user interface, and video display. Using the video display, the processor presents a row entry template comprising a menu field and an associated data field. The user completes the menu field by selecting a desired entry from a list of predefined menu entries, and completes the data field by entering data into it. This format is especially useful when the data entry provides data categorized by the menu entry, explains the menu entry, or otherwise pertains to it. Each time the GUI detects activation of a form expand key, it presents an additional row entry template for completion by the user. Upon selection of a submit key, the completed form's data is sent to a predefined destination, such as a linked list, table, database, or another computer. Thus, by planned selection of menu entries, the user can limit completion of an electronic form to the blanks applicable to that user and avoid the rest; nonetheless, the form can be expanded row by row to accommodate as many blanks as the user wishes to complete. The invention may be implemented by a host sending a remote computer machine-executable instructions that the remote computer executes to provide the GUI, with the remote computer ultimately returning the completed form data to the host.
Owner:IBM CORP
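
As a hedged illustration of the data flow described above (not the patented implementation), the sketch below models the expandable form as a linked list of row entries: each activation of the form expand key appends a row, and submit walks the list. The class and field names are hypothetical.

```python
class Row:
    """One row entry: a selected menu entry plus its associated data field."""
    def __init__(self, menu_entry, data):
        self.menu_entry = menu_entry
        self.data = data
        self.next = None

class ExpandableForm:
    """Hypothetical model: 'expand' appends a row; 'submit' collects them all."""
    def __init__(self):
        self.head = self.tail = None

    def expand(self, menu_entry, data):
        row = Row(menu_entry, data)
        if self.tail is None:
            self.head = self.tail = row
        else:
            self.tail.next = row
            self.tail = row

    def submit(self):
        rows, node = [], self.head
        while node is not None:
            rows.append((node.menu_entry, node.data))
            node = node.next
        return rows

form = ExpandableForm()
form.expand("Phone", "555-0100")   # user fills only the blanks that apply
form.expand("Email", "a@b.com")
print(form.submit())
```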

Method of saving a web page to a local hard drive to enable client-side browsing

A method of copying a Web page presented for display on a browser of a Web client. The Web page comprises a base HTML document and a plurality of hypertext references, one or more of which may be associated with embedded objects (such as image files). The operation begins by copying the base HTML document to the client's local storage and establishing a pointer to the copied document. A first linked list of the hypertext references in the base document is then generated. Thereafter, for each hypertext reference in the first linked list, the following operations are performed: if the hypertext reference refers to an embedded object in the base HTML document, the embedded object is saved on the client's local storage and the file name of the saved object is stored (as a fully-qualified URL) in a second linked list; if it does not, the fully-qualified URL of the hypertext reference is stored in the second linked list. The fully-qualified URLs of the second linked list (including those associated with the stored images) are then updated to point to the files on the client's local storage. At the end of this operation, there is a new HTML page whose image links point to files on the local hard drive. When the user wishes to retrieve the copied page, a link to the pointer is activated.
Owner:IBM CORP
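
The two-list procedure above sketches naturally into code. The following is an assumed, simplified rendering: `fetch` and `save_locally` are hypothetical callbacks standing in for network retrieval and local storage, plain Python lists stand in for the two linked lists, and a crude regex stands in for real HTML parsing.

```python
import re

def save_page(base_html, fetch, save_locally):
    """Hedged sketch of the two-linked-list save procedure."""
    # First list: every hypertext reference found in the base document.
    refs = re.findall(r'(?:href|src)="([^"]+)"', base_html)
    # Second list: one entry per reference - a local file name for saved
    # embedded objects, or the unchanged fully-qualified URL otherwise.
    rewritten = []
    for url in refs:
        if url.endswith((".gif", ".jpg", ".png")):   # embedded object
            rewritten.append(save_locally(fetch(url)))
        else:
            rewritten.append(url)
    # Update the copied document so image links point at local files.
    for old, new in zip(refs, rewritten):
        base_html = base_html.replace(f'"{old}"', f'"{new}"')
    return base_html

page = '<img src="http://x/a.gif"> <a href="http://x/b.html">b</a>'
print(save_page(page,
                fetch=lambda url: b"...image bytes...",
                save_locally=lambda data: "file:///tmp/a.gif"))
```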

Flexible engine and data structure for packet header processing

A pipelined linecard architecture for receiving, modifying, switching, buffering, queuing and dequeuing packets for transmission in a communications network. The linecard has two paths: the receive path, which carries packets into the switch device from the network, and the transmit path, which carries packets from the switch to the network. In the receive path, received packets are processed and switched in an asynchronous, multi-stage pipeline utilizing programmable data structures for fast table lookup and linked list traversal. The pipelined switch operates on several packets in parallel while determining each packet's routing destination. Once that determination is made, each packet is modified to contain new routing information as well as additional header data to help speed it through the switch. Each packet is then buffered and enqueued for transmission over the switching fabric to the linecard attached to the proper destination port. The destination linecard may be the same physical linecard as that receiving the inbound packet or a different physical linecard. The transmit path consists of a buffer/queuing circuit similar to that used in the receive path. Both enqueuing and dequeuing of packets are accomplished using CoS-based decision-making apparatus and congestion-avoidance and dequeue-management hardware. The architecture of the present invention has the advantages of high throughput and the ability to rapidly implement new features and capabilities.
Owner:CISCO TECH INC
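
To make the multi-stage, packets-in-parallel idea concrete, here is a toy software model (not the patented hardware): each cycle shifts every in-flight packet one stage forward, so a lookup stage and a header-rewrite stage each hold a different packet at the same time. All stage logic is invented for illustration.

```python
from collections import defaultdict

def lookup(pkt):
    """Stage 1: a table lookup decides the destination port (toy hash)."""
    pkt["dest"] = pkt["addr"] % 4
    return pkt

def rewrite(pkt):
    """Stage 2: attach new routing info / an internal header."""
    pkt["header"] = ("slot", pkt["dest"])
    return pkt

def run_pipeline(packets, stages, queues):
    """Toy pipe: packets advance one stage per cycle, so up to len(stages)
    packets are processed 'in parallel' before being enqueued."""
    in_flight = [None] * len(stages)
    feed, done = iter(packets), 0
    while done < len(packets):
        if in_flight[-1] is not None:            # finished packet leaves the pipe
            pkt = in_flight[-1]
            queues[pkt["dest"]].append(pkt)
            done += 1
        for i in range(len(stages) - 1, 0, -1):  # shift everything one stage
            in_flight[i] = stages[i](in_flight[i - 1]) if in_flight[i - 1] else None
        nxt = next(feed, None)
        in_flight[0] = stages[0](nxt) if nxt is not None else None

queues = defaultdict(list)
run_pipeline([{"addr": a} for a in (7, 8, 9)], [lookup, rewrite], queues)
print(dict(queues))
```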

Pipelined packet switching and queuing architecture

A pipelined linecard architecture for receiving, modifying, switching, buffering, queuing and dequeuing packets for transmission in a communications network. The linecard has two paths: the receive path, which carries packets into the switch device from the network, and the transmit path, which carries packets from the switch to the network. In the receive path, received packets are processed and switched in a multi-stage pipeline utilizing programmable data structures for fast table lookup and linked list traversal. The pipelined switch operates on several packets in parallel while determining each packet's routing destination. Once that determination is made, each packet is modified to contain new routing information as well as additional header data to help speed it through the switch. Using bandwidth management techniques, each packet is then buffered and enqueued for transmission over the switching fabric to the linecard attached to the proper destination port. The destination linecard may be the same physical linecard as that receiving the inbound packet or a different physical linecard. The transmit path includes a buffer/queuing circuit similar to that used in the receive path and can include another pipelined switch. Both enqueuing and dequeuing of packets are accomplished using CoS-based decision-making apparatus, congestion-avoidance, and bandwidth-management hardware.
Owner:CISCO TECH INC
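
The CoS-based enqueue/dequeue decision can be sketched with one FIFO per class of service. The strict highest-class-first dequeue below is a deliberately simplified stand-in for the patent's bandwidth-management and congestion-avoidance machinery.

```python
from collections import deque

class CosQueues:
    """Sketch: one FIFO per class of service (CoS); dequeue serves the
    highest backlogged class first."""
    def __init__(self, classes=8):
        self.q = [deque() for _ in range(classes)]

    def enqueue(self, pkt):
        self.q[pkt["cos"]].append(pkt)

    def dequeue(self):
        for cos in range(len(self.q) - 1, -1, -1):  # highest CoS wins
            if self.q[cos]:
                return self.q[cos].popleft()
        return None

cq = CosQueues()
cq.enqueue({"cos": 1, "id": "a"})
cq.enqueue({"cos": 6, "id": "b"})
print(cq.dequeue()["id"])    # 'b': the higher class is served first
```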

High-speed hardware implementation of MDRR algorithm over a large number of queues

A pipelined linecard architecture for receiving, modifying, switching, buffering, queuing and dequeuing packets for transmission in a communications network. The linecard has two paths: the receive path, which carries packets into the switch device from the network, and the transmit path, which carries packets from the switch to the network. In the receive path, received packets are processed and switched in an asynchronous, multi-stage pipeline utilizing programmable data structures for fast table lookup and linked list traversal. The pipelined switch operates on several packets in parallel while determining each packet's routing destination. Once that determination is made, each packet is modified to contain new routing information as well as additional header data to help speed it through the switch. Each packet is then buffered and enqueued for transmission over the switching fabric to the linecard attached to the proper destination port. The destination linecard may be the same physical linecard as that receiving the inbound packet or a different physical linecard. The transmit path consists of a buffer/queuing circuit similar to that used in the receive path. Both enqueuing and dequeuing of packets are accomplished using CoS-based decision-making apparatus and congestion-avoidance and dequeue-management hardware. The architecture of the present invention has the advantages of high throughput and the ability to rapidly implement new features and capabilities.
Owner:CISCO TECH INC
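
MDRR builds on deficit round robin (DRR). The sketch below implements plain DRR over several queues, each with a per-round quantum and a banked deficit; MDRR additionally gives one designated queue priority treatment, which is omitted here for brevity.

```python
from collections import deque

class DRRScheduler:
    """Deficit round robin: each backlogged queue earns `quantum` bytes of
    credit per round and may send packets while its deficit lasts."""
    def __init__(self, quanta):
        self.queues = [deque() for _ in quanta]
        self.quanta = quanta              # per-round byte credit per queue
        self.deficit = [0] * len(quanta)

    def enqueue(self, qid, pkt_len):
        self.queues[qid].append(pkt_len)

    def round(self):
        """One DRR round; returns the (queue, size) pairs sent."""
        sent = []
        for i, q in enumerate(self.queues):
            if not q:
                continue
            self.deficit[i] += self.quanta[i]
            while q and q[0] <= self.deficit[i]:
                pkt = q.popleft()
                self.deficit[i] -= pkt
                sent.append((i, pkt))
            if not q:
                self.deficit[i] = 0       # empty queues do not bank credit
        return sent

s = DRRScheduler(quanta=[500, 1500])
for qid, size in [(0, 300), (0, 400), (1, 1000)]:
    s.enqueue(qid, size)
print(s.round())   # [(0, 300), (1, 1000)]: q0's 400-byte packet must wait
print(s.round())   # [(0, 400)]
```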

High-speed hardware implementation of red congestion control algorithm

A pipelined linecard architecture for receiving, modifying, switching, buffering, queuing and dequeuing packets for transmission in a communications network. The linecard has two paths: the receive path, which carries packets into the switch device from the network, and the transmit path, which carries packets from the switch to the network. In the receive path, received packets are processed and switched in an asynchronous, multi-stage pipeline utilizing programmable data structures for fast table lookup and linked list traversal. The pipelined switch operates on several packets in parallel while determining each packet's routing destination. Once that determination is made, each packet is modified to contain new routing information as well as additional header data to help speed it through the switch. Each packet is then buffered and enqueued for transmission over the switching fabric to the linecard attached to the proper destination port. The destination linecard may be the same physical linecard as that receiving the inbound packet or a different physical linecard. The transmit path consists of a buffer/queuing circuit similar to that used in the receive path. Both enqueuing and dequeuing of packets are accomplished using CoS-based decision-making apparatus and congestion-avoidance and dequeue-management hardware. The architecture of the present invention has the advantages of high throughput and the ability to rapidly implement new features and capabilities.
Owner:CISCO TECH INC
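
RED itself is a well-known textbook algorithm: track an exponentially weighted moving average of queue depth, and drop arriving packets with a probability that ramps up between two thresholds. A minimal software rendering follows (omitting RED's count-based drop-spacing refinement); all parameter values are illustrative.

```python
import random

class RED:
    """Textbook random early detection over an average queue depth."""
    def __init__(self, min_th=5, max_th=15, max_p=0.1, weight=0.002):
        self.min_th, self.max_th, self.max_p, self.w = min_th, max_th, max_p, weight
        self.avg = 0.0

    def admit(self, queue_len):
        """Return False when the arriving packet should be dropped."""
        self.avg = (1 - self.w) * self.avg + self.w * queue_len  # EWMA
        if self.avg < self.min_th:
            return True                    # underloaded: always accept
        if self.avg >= self.max_th:
            return False                   # overloaded: always drop
        p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
        return random.random() >= p        # drop with probability p

red = RED(weight=0.5)                      # large weight so the demo reacts fast
decisions = [red.admit(depth) for depth in range(25)]
print(decisions.count(False), "of 25 arrivals dropped")
```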

Methods and systems of cache memory management and snapshot operations

The present invention relates to a cache memory management system suitable for use with snapshot applications. The system includes a cache directory comprising a hash table, hash table elements, cache line descriptors, and cache line functional pointers; a cache manager running a hashing function that converts a request for data from an application into an index to a first hash table pointer in the hash table; and a cache memory comprising a plurality of cache lines. The first hash table pointer in turn points to a first hash table element in a linked list of hash table elements, one of which points to a first cache line descriptor in the cache directory, and the first cache line descriptor has a one-to-one association with a first cache line. The present invention also provides a method of converting a request for data into an input to a hashing function, addressing a hash table based on a first index output from the hashing function, searching the hash table elements pointed to by the first index for the requested data, determining that the requested data is not in cache memory, and allocating a first hash table element and a first cache line descriptor associated with a first cache line in the cache memory.
Owner:ORACLE INT CORP
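
The directory structure described above (hash table → linked list of hash table elements → cache line descriptor → cache line) can be sketched directly. The rendering below is an assumed simplification: Python's hash(), the bucket count, and the line size are all chosen arbitrarily.

```python
class CacheLineDescriptor:
    """One-to-one with a cache line slot."""
    def __init__(self, line_index):
        self.line_index = line_index

class HashTableElement:
    """Chained element: a key, its descriptor, and the next element."""
    def __init__(self, key, descriptor, next_elem=None):
        self.key, self.descriptor, self.next = key, descriptor, next_elem

class CacheDirectory:
    def __init__(self, buckets=64, lines=256):
        self.table = [None] * buckets                       # hash table pointers
        self.lines = [bytearray(512) for _ in range(lines)] # cache memory
        self.free = list(range(lines))

    def lookup(self, key):
        """Hash the request, then walk the bucket's linked list of elements."""
        elem = self.table[hash(key) % len(self.table)]
        while elem is not None:
            if elem.key == key:
                return self.lines[elem.descriptor.line_index]  # cache hit
            elem = elem.next
        return None                                            # cache miss

    def allocate(self, key):
        """On a miss: allocate an element and a descriptor tied to a free line."""
        idx = hash(key) % len(self.table)
        desc = CacheLineDescriptor(self.free.pop())
        self.table[idx] = HashTableElement(key, desc, self.table[idx])
        return self.lines[desc.line_index]

d = CacheDirectory()
if d.lookup("blk:17") is None:
    d.allocate("blk:17")[:4] = b"DATA"
print(bytes(d.lookup("blk:17")[:4]))   # b'DATA'
```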

Flexible DMA engine for packet header modification

A pipelined linecard architecture for receiving, modifying, switching, buffering, queuing and dequeuing packets for transmission in a communications network. The linecard has two paths: the receive path, which carries packets into the switch device from the network, and the transmit path, which carries packets from the switch to the network. In the receive path, received packets are processed and switched in an asynchronous, multi-stage pipeline utilizing programmable data structures for fast table lookup and linked list traversal. The pipelined switch operates on several packets in parallel while determining each packet's routing destination. Once that determination is made, each packet is modified to contain new routing information as well as additional header data to help speed it through the switch. Each packet is then buffered and enqueued for transmission over the switching fabric to the linecard attached to the proper destination port. The destination linecard may be the same physical linecard as that receiving the inbound packet or a different physical linecard. The transmit path consists of a buffer/queuing circuit similar to that used in the receive path. Both enqueuing and dequeuing of packets are accomplished using CoS-based decision-making apparatus and congestion-avoidance and dequeue-management hardware. The architecture of the present invention has the advantages of high throughput and the ability to rapidly implement new features and capabilities.
Owner:CISCO TECH INC
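
As a loose illustration of header modification by a descriptor-driven DMA engine, the toy "gather" below assembles an outgoing packet from (buffer, offset, length) descriptors, splicing a new internal header in front of the original payload. The header bytes and lengths are invented.

```python
def dma_gather(segments):
    """Toy gather DMA: build an outgoing packet from descriptor-listed
    (source buffer, offset, length) segments."""
    out = bytearray()
    for buf, off, length in segments:
        out += buf[off:off + length]
    return bytes(out)

payload = bytes([0xDE, 0xAD, 0xBE, 0xEF]) + b"payload"   # old 4-byte header + data
new_hdr = bytes([0x01, 0x07])                            # hypothetical (slot, port)
pkt = dma_gather([(new_hdr, 0, 2), (payload, 4, len(payload) - 4)])
print(pkt)                                               # b'\x01\x07payload'
```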

Methods and apparatus for storing and manipulating variable length and fixed length data elements as a sequence of fixed length integers

Apparatus for storing and processing a plurality of data items, each comprising supplied data values organized in one or more fields, each of which stores typed data. Character strings and natural language text are converted to numerical token values in an array of fixed-length integers, and other forms of typed data (real numbers, dates, times, Boolean values, etc.) are also converted to integer form and stored in the array. Stored metadata specifies the data type of all data in the integer array, enabling each integer to be rapidly accessed and interpreted. When fixed-length data types are present, the metadata specifies the location, size, and type of each fixed-length element. When variable-length data is stored in the integer array, size and location data stored in the array are used to access the variable-size data rapidly and directly. The presence of implicit or explicit size information for each data structure, including variable-size structures, speeds processing by eliminating the need to scan the data for delimiters and by reducing the processing needed to perform memory allocation, data movement, lookup operations, and data addressing functions. Data stored in the integer array is subdivided into items, and items are subdivided into fields. Items may be organized into more complex data structures, such as relational tables, hierarchical object structures, linked lists, and trees, using special fields called links which identify other referenced items.
Owner:CALL CHARLES G
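
A rough sketch of the scheme, under assumed encodings: strings are tokenized into integers, a variable-length field carries an explicit size word so no delimiter scan is needed, and separate metadata records each field's type and start index. Every name here is hypothetical.

```python
TOKENS = {}   # string -> integer token (toy tokenizer)

def tokenize(word):
    return TOKENS.setdefault(word, len(TOKENS) + 1)

def pack_item(age, active, text):
    """Pack one item into a flat integer array. Fixed-length fields occupy
    one word each; the variable-length text field starts with its size."""
    words = text.split()
    arr = [age, int(active), len(words)] + [tokenize(w) for w in words]
    meta = [("int", 0), ("bool", 1), ("text", 2)]   # (type, start index)
    return arr, meta

def read_text(arr, start):
    """Direct access: the explicit size word avoids scanning for delimiters."""
    size = arr[start]
    return arr[start + 1 : start + 1 + size]

arr, meta = pack_item(42, True, "linked lists are fast to splice")
print(arr)
print(read_text(arr, 2))   # the six token values of the text field
```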