
2985 results about "Data cache" patented technology

Data Caching: Data caching means caching data from a data source. As long as the cache has not expired, a request for the data is fulfilled from the cache. When the cache has expired, fresh data is obtained from the data source and the cache is refilled.
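
The expire-and-refill cycle just described is easy to make concrete. Below is a minimal sketch in Python; the class name, the `fetch` callable standing in for the data source, and the TTL policy are illustrative assumptions, not drawn from any patent in this list.

```python
import time

class ExpiringCache:
    """Serve data from the cache until it expires, then refill from the source."""
    def __init__(self, fetch, ttl_seconds=60.0):
        self.fetch = fetch            # callable standing in for the data source
        self.ttl = ttl_seconds
        self.store = {}               # key -> (value, expiry timestamp)

    def get(self, key):
        entry = self.store.get(key)
        if entry is not None and time.monotonic() < entry[1]:
            return entry[0]           # cache not expired: fulfil from the cache
        value = self.fetch(key)       # expired or absent: go back to the source
        self.store[key] = (value, time.monotonic() + self.ttl)
        return value
```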

Method and apparatus for efficient scalable storage management

A hybrid centralized and distributed processing system includes a switching device that connects a storage processor to one or more servers through a host channel processor. The switching device also connects the storage processor to one or more storage devices such as disk drive arrays, and to a metadata cache and a block data cache memory. The storage processor processes access requests from one or more servers in the form of a logical volume or logical block address and accesses the metadata cache to determine the physical data address. The storage processor monitors the performance of the storage system and performs automatic tuning by reallocating the logical volume, load balancing, hot-spot removal, and dynamic expansion of storage volume. The storage processor also provides fault-tolerant access and parallel high-performance data paths for failover. Finally, the storage processor provides faster access by providing parallel data paths, making local copies, providing remote data copies, and selecting data from the storage device that retrieves the data the earliest.
Owner:COPAN SYST INC +1
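
The core lookup in this abstract, translating a logical volume and block address into a physical address via the metadata cache, can be pictured roughly as follows. This is a loose sketch, not the patented method; every name here (`resolve`, `metadata_store`, and so on) is a hypothetical stand-in.

```python
class StorageProcessor:
    """Sketch: resolve logical addresses via a metadata cache."""
    def __init__(self, metadata_store):
        self.metadata_store = metadata_store   # authoritative map, e.g. on disk
        self.metadata_cache = {}               # (volume, lba) -> physical address

    def resolve(self, volume, lba):
        key = (volume, lba)
        phys = self.metadata_cache.get(key)
        if phys is None:
            phys = self.metadata_store[key]    # slow path: full metadata lookup
            self.metadata_cache[key] = phys
        return phys
```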

Load balancing technique implemented in a data network device utilizing a data cache

A technique for implementing a load balanced server farm system is described which may be used for effecting electronic commerce over a data network. The system comprises a load balancing system and a plurality of servers in communication with the load balancing system. Each of the plurality of servers may include a respective data cache for storing state information relating to client session transactions conducted between the server and a particular client. The load balancing system is configured to select, using a load balancing protocol, an available first server from the plurality of servers to process an initial packet received from a source device such as, for example, a client machine of a customer. The load balancing system is also configured to route subsequent packets received from the source device to the first server. In this way, a “stickiness” scheme may be implemented in the server farm system whereby, once an electronic commerce session has been initiated between the first server and the source device, the first server may handle all subsequent requests from the source device in order to make optimal use of the state data stored in the first server's data cache. Before generating its response, the first server may verify that the state information relating to a specific client session stored in the data cache is up-to-date. If the first server determines that the state information stored in the data cache is not up-to-date, then the first server may be configured to retrieve the desired up-to-date state information from a database which is configured to store all state information relating to client sessions which have been initiated with the server farm system.
Owner:JUNE RAY
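
The "stickiness" scheme above amounts to two rules: pick a server for a source's first packet using the load balancing protocol, then pin that source to the same server so its cached session state stays useful. A minimal sketch, assuming a least-loaded selection policy and an in-memory session table (both illustrative choices):

```python
class StickyLoadBalancer:
    """Sketch: route a source's first packet by least load, then stick to it."""
    def __init__(self, servers):
        self.servers = servers            # list of server identifiers
        self.load = {s: 0 for s in servers}
        self.sessions = {}                # source address -> chosen server

    def route(self, source_addr):
        server = self.sessions.get(source_addr)
        if server is None:
            # load balancing protocol: here, pick the least-loaded server
            server = min(self.servers, key=lambda s: self.load[s])
            self.sessions[source_addr] = server
        self.load[server] += 1            # all later packets go to the same server
        return server
```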

Cache memory background preprocessing

A cache memory preprocessor prepares a cache memory for use by a processor. The processor accesses a main memory via a cache memory, which serves as a data cache for the main memory. The cache memory preprocessor consists of a command inputter, which receives a multiple-way cache memory processing command from the processor, and a command implementer. The command implementer performs background processing upon multiple ways of the cache memory in order to implement the cache memory processing command received by the command inputter.
Owner:ANALOG DEVICES INC
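
The division of labour described, a command inputter that accepts a multiple-way processing command and a command implementer that applies it across the ways in the background, might look roughly like this in software. The queue-and-thread structure and all names are assumptions made for illustration; the patent itself concerns a hardware preprocessor.

```python
import queue
import threading

class CachePreprocessor:
    """Sketch: accept cache-processing commands and apply them to every
    way of a set-associative cache in a background thread."""
    def __init__(self, cache_ways):
        self.cache_ways = cache_ways              # e.g. one list per way
        self.commands = queue.Queue()             # the 'command inputter'
        threading.Thread(target=self._implementer, daemon=True).start()

    def submit(self, command):
        self.commands.put(command)                # processor issues a command

    def _implementer(self):
        while True:
            command = self.commands.get()
            for way in self.cache_ways:           # background pass over all ways
                command(way)

# Example command: invalidate every line in a way (illustrative only).
def invalidate(way):
    for i in range(len(way)):
        way[i] = None
```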

Digital camera system containing a VLIW vector processor

A digital camera has a sensor for sensing an image, a processor for modifying the sensed image in accordance with instructions input into the camera, and an output for outputting the modified image, where the processor includes a series of processing elements arranged around a central crossbar switch. The processing elements include an Arithmetic Logic Unit (ALU) acting under the control of a writeable microcode store and an internal input and output FIFO for storing pixel data to be processed by the processing elements; the processor is interconnected to a read and write FIFO for reading and writing pixel data of images to the processor. The processing elements can be arranged in a ring, with each element also separately connected to its nearest neighbors. The ALU receives a series of inputs interconnected via an internal crossbar switch to a series of core processing units within the ALU and includes a number of internal registers for the storage of temporary data. The core processing units can include at least one of a multiplier, an adder, and a barrel shifter. The processing elements are further connected to a common data bus for the transfer of pixel data to the processing elements, and the data bus is interconnected to a data cache which acts as an intermediate cache between the processing elements and a memory store for storing the images.
Owner:GOOGLE LLC
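
As a rough software picture of one detail above, the ring arrangement with nearest-neighbour links between processing elements can be sketched as below. This only illustrates the topology; all names are invented.

```python
class ProcessingElement:
    """Sketch of one element in the ring; left/right are nearest neighbours."""
    def __init__(self, index):
        self.index = index
        self.left = None
        self.right = None

def build_ring(n):
    """Arrange n processing elements in a ring, each linked to its neighbours."""
    pes = [ProcessingElement(i) for i in range(n)]
    for i, pe in enumerate(pes):
        pe.left = pes[(i - 1) % n]    # wrap around at the ends
        pe.right = pes[(i + 1) % n]
    return pes
```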

Method of efficient dynamic data cache prefetch insertion

Status: Inactive | Publication: US20030145314A1 | Effect: reducing subsequent cache misses | Classification: memory architecture accessing/allocation; software engineering | Concepts: cache miss, data cache
A system and method for dynamically inserting a data cache prefetch instruction into a program executable to optimize the program being executed. The method, and the system thereof, monitor the execution of the program, sample the cache miss events, identify the time-consuming execution paths, and optimize the program during runtime by inserting a prefetch instruction into newly optimized code to hide cache miss latency.
Owner:SUN MICROSYSTEMS INC
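
The runtime loop implied by this abstract, sample cache-miss events, find the load sites that miss the most, then patch in prefetches, can be sketched as decision logic. The sampling rate, threshold, and names below are illustrative assumptions; actual prefetch insertion would happen in a binary rewriter or JIT, which this sketch only gestures at.

```python
from collections import Counter

class PrefetchOptimizer:
    """Sketch of the decision logic: sample cache-miss events, find the
    load sites that miss most, and mark them for prefetch insertion."""
    def __init__(self, sample_rate=100, hot_threshold=50):
        self.sample_rate = sample_rate      # record 1 in N miss events
        self.hot_threshold = hot_threshold  # misses before a site is 'hot'
        self.miss_counts = Counter()
        self.events_seen = 0

    def on_cache_miss(self, load_site):
        self.events_seen += 1
        if self.events_seen % self.sample_rate == 0:
            self.miss_counts[load_site] += 1

    def sites_to_prefetch(self):
        # a runtime optimizer would now rewrite these sites, placing a
        # prefetch instruction far enough ahead to hide the miss latency
        return [s for s, n in self.miss_counts.items() if n >= self.hot_threshold]
```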

Load balancing technique implemented in a data network device utilizing a data cache

Techniques for implementing a load balanced server system are described which may be used for effecting electronic commerce over a data network. The system comprises a load balancing system and a plurality of servers in communication with the load balancing system. Each of the plurality of servers may include a respective data cache for storing state information relating to client session transactions conducted between the server and a particular client. The load balancing system can be configured to select, using a load balancing protocol, an available first server from the plurality of servers to process an initial packet received from a source device such as, for example, a client machine of a customer. The load balancing system can also be configured to route subsequent packets received from the source device to the first server. Before generating its response, the first server may verify that the state information relating to a specific client session stored in the data cache is up-to-date. If the first server determines that the state information stored in the data cache is not up-to-date, then the first server may be configured to retrieve the desired up-to-date state information from a database which is configured to store all state information relating to client sessions which have been initiated with the server system.
Owner:JUNE RAY

Caching scheme for multi-dimensional data

A system, method, and computer program product for caching multi-dimensional data based on an assumption of locality of reference. A user sends a query for data. A compilation module converts the query into a set of cubelet addresses and canonical addresses. In the described embodiment, if the data corresponding to the cubelet address is found in a data cache, the data cache returns the cubelet, which may contain the requested data and data for "nearby" cells. The data corresponding to the canonical addresses is extracted from the returned cubelet. If the data is not found in the data cache, a fault handler queries a back-end database for the cubelet identified by the cubelet address. This cubelet includes the requested data and data for "nearby" cells. The requested data and the data for "nearby" cells are in the form of values of measure attributes and associated canonical addresses. The returned cubelet is then cached and the data corresponding to the canonical addresses is extracted.
Owner:IBM CORP
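
The cubelet scheme is a clean example of exploiting locality of reference: fetching one cell's cubelet also caches its neighbours. A minimal sketch, assuming integer cell coordinates and a fixed cubelet edge length (both illustrative):

```python
class CubeletCache:
    """Sketch: cache multi-dimensional data in 'cubelets' so a request
    for one cell also brings in its neighbours."""
    def __init__(self, backend, edge=8):
        self.backend = backend   # callable: cubelet address -> {cell: value}
        self.edge = edge         # cells per cubelet along each dimension
        self.cache = {}          # cubelet address -> cubelet contents

    def get(self, cell):
        # compilation step: cell coordinates -> address of enclosing cubelet
        cubelet_addr = tuple(c // self.edge for c in cell)
        cubelet = self.cache.get(cubelet_addr)
        if cubelet is None:
            # fault handler: fetch the whole cubelet from the back-end database
            cubelet = self.backend(cubelet_addr)
            self.cache[cubelet_addr] = cubelet
        return cubelet[cell]     # extract the requested cell's measure value
```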

Method and system for dynamically partitioning very large database indices on write-once tables

Methods and systems for partitioning and dynamically merging a database index are described. A database index includes a single first-level index partition stored in a data cache. When the first-level index partition in the data cache reaches a predetermined size, it is copied to secondary storage and a new index partition is generated in the data cache. When the number of index partitions in secondary storage reaches some predetermined number, the index partitions are merged to create a single index partition of a higher level in a hierarchy of index partitions having an exponentially increasing size with each increase in level within the hierarchy.
Owner:SAP AG
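
This is essentially a log-structured merge pattern: a small in-cache partition spills to storage, and same-level partitions merge into exponentially larger ones a level up. A minimal sketch with invented parameters (`max_cache_entries`, `fan_in`):

```python
class PartitionedIndex:
    """Sketch: one in-cache partition; spill to storage at a size limit and
    merge same-level partitions into one a level higher."""
    def __init__(self, max_cache_entries=4, fan_in=2):
        self.max_cache_entries = max_cache_entries
        self.fan_in = fan_in            # partitions merged per level
        self.cache_partition = {}       # first-level partition in the data cache
        self.levels = {}                # level -> list of on-storage partitions

    def insert(self, key, value):
        self.cache_partition[key] = value
        if len(self.cache_partition) >= self.max_cache_entries:
            self._spill(0, dict(self.cache_partition))
            self.cache_partition.clear()   # start a fresh in-cache partition

    def _spill(self, level, partition):
        parts = self.levels.setdefault(level, [])
        parts.append(partition)
        if len(parts) >= self.fan_in:      # merge into one larger partition
            merged = {}
            for p in parts:
                merged.update(p)
            self.levels[level] = []
            self._spill(level + 1, merged)  # partitions grow exponentially
```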

Threshold-based load address prediction and new thread identification in a multithreaded microprocessor

A method and apparatus for predicting load addresses and identifying new threads of instructions for execution in a multithreaded processor. A load prediction unit scans an instruction window for load instructions. A load prediction table is searched for an entry corresponding to a detected load instruction. If an entry is found in the table, a load address prediction is made for the load instruction and conveyed to the data cache. If the load address misses in the cache, the data is prefetched. Subsequently, if it is determined that the load prediction was incorrect, a miss counter in the corresponding entry in the load prediction table is incremented. If, on a subsequent detection of the load instruction, the miss counter has reached a threshold, the load instruction is predicted to miss. In response to the predicted miss, a new thread of instructions is identified for execution.
Owner:ORACLE INT CORP
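
The miss-counter mechanism can be sketched as a small prediction table. The threshold value and the interface below are illustrative assumptions; in hardware this logic would sit beside the data cache rather than in software.

```python
class LoadPredictor:
    """Sketch: predict load addresses per load site; after enough
    mispredictions, predict a miss and identify a new thread instead."""
    MISS_THRESHOLD = 3   # illustrative value

    def __init__(self):
        self.table = {}  # load PC -> {'addr': predicted address, 'misses': count}

    def on_load(self, pc):
        entry = self.table.get(pc)
        if entry is None:
            return None                   # no prediction yet for this load
        if entry['misses'] >= self.MISS_THRESHOLD:
            return 'spawn_thread'         # predicted to miss: new thread of work
        return entry['addr']              # convey prediction to the data cache

    def on_resolve(self, pc, actual_addr, predicted_addr):
        entry = self.table.setdefault(pc, {'addr': actual_addr, 'misses': 0})
        if predicted_addr is not None and predicted_addr != actual_addr:
            entry['misses'] += 1          # misprediction: bump the miss counter
        entry['addr'] = actual_addr       # remember the latest address
```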

Memory controller with write data cache and read data cache

A memory controller 14 includes a write data cache 18, a read data cache 20 and coherency circuitry 22. The coherency circuitry 22 manages coherency of data between the write data cache 18, the read data cache 20 and data stored within a main memory 16 when servicing read requests and write requests received by the memory controller 14. Write complete signals are issued back to a write requesting circuit as soon as a write request has had its write data stored within the write data cache 18.
Owner:ARM LTD
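
The key behaviours here are the early write-complete signal and coherency between the two caches. A minimal sketch, assuming a flat dictionary for main memory and ignoring timing (the reference numerals from the abstract are omitted):

```python
class MemoryController:
    """Sketch: write-complete as soon as data lands in the write cache;
    reads check the write cache first so they stay coherent with it."""
    def __init__(self, main_memory):
        self.main_memory = main_memory    # dict: address -> data
        self.write_cache = {}
        self.read_cache = {}

    def write(self, addr, data):
        self.write_cache[addr] = data
        self.read_cache.pop(addr, None)   # coherency: drop any stale read copy
        return 'write_complete'           # signalled before main memory updates

    def read(self, addr):
        if addr in self.write_cache:      # newest data may still be here
            return self.write_cache[addr]
        if addr not in self.read_cache:
            self.read_cache[addr] = self.main_memory[addr]
        return self.read_cache[addr]

    def drain(self):
        # background write-back of the write data cache to main memory
        for addr, data in self.write_cache.items():
            self.main_memory[addr] = data
        self.write_cache.clear()
```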

Storage area network file system

A shared storage distributed file system is presented that provides applications with transparent access to a storage area network (SAN) attached storage device. This is accomplished by providing clients read access to the devices over the SAN and by requiring most write activity to be serialized through a network attached storage (NAS) server. Both the clients and the NAS server are connected to the SAN-attached device over the SAN. Direct read access to the SAN attached device is provided through a local file system on the client. Write access is provided through a remote file system on the client that utilizes the NAS server. A supplemental read path is provided through the NAS server for those circumstances where the local file system is unable to provide valid data reads. Consistency is maintained by comparing modification times in the local and remote file systems. Since writes occur over the remote file systems, the consistency mechanism is capable of flushing data caches in the remote file system, and invalidating metadata and real-data caches in the local file system. It is possible to utilize unmodified local and remote file systems in the present invention, by layering over the local and remote file systems a new file system. This new file system need only be installed at each client, allowing the NAS server file systems to operate unmodified. Alternatively, the new file system can be combined with the local file system.
Owner:DATAPLOW
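
The consistency mechanism, compare modification times across the local and remote file systems and invalidate on mismatch, can be sketched as below. The `mtime`, `flush_data_cache`, and `invalidate_*` methods are hypothetical stand-ins for whatever the two file systems actually expose.

```python
class ConsistencyChecker:
    """Sketch: compare modification times between the local (SAN) and
    remote (NAS) views of a file; on mismatch, flush and invalidate caches."""
    def __init__(self, local_fs, remote_fs):
        self.local_fs = local_fs      # assumed interfaces, see lead-in above
        self.remote_fs = remote_fs

    def ensure_fresh(self, path):
        if self.local_fs.mtime(path) != self.remote_fs.mtime(path):
            # a writer went through the NAS server since our last read
            self.remote_fs.flush_data_cache(path)
            self.local_fs.invalidate_metadata(path)
            self.local_fs.invalidate_real_data(path)
```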

System and Methodology Providing Multiple Heterogeneous Buffer Caches

A method for temporarily storing data objects in memory of a distributed system comprising a plurality of servers sharing access to data comprises steps of: reserving memory at each of the plurality of servers as a default data cache for storing data objects; in response to user input, allocating memory of at least one of the plurality of servers as a named cache reserved for storing a specified type of data object; in response to an operation at a particular server requesting a data object, determining whether the requested data object is of the specified type corresponding to the named cache at the particular server; if the data object is determined to be of the specified type corresponding to the named cache, storing the requested data object in the named cache at the particular server; and otherwise, using the default data cache for storing the requested data object.
Owner:SAP AMERICA
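
The routing rule in this claim is simple: if a requested object's type has a named cache on this server, use it; otherwise fall back to the default data cache. A minimal sketch with invented names:

```python
class BufferCacheManager:
    """Sketch: per-server named caches for specific object types,
    with a default data cache for everything else."""
    def __init__(self):
        self.default_cache = {}       # reserved at server start-up
        self.named_caches = {}        # object type -> dedicated named cache

    def create_named_cache(self, object_type):
        # in response to user input: allocate a cache for one object type
        self.named_caches[object_type] = {}

    def store(self, obj_id, obj, object_type):
        cache = self.named_caches.get(object_type, self.default_cache)
        cache[obj_id] = obj           # named cache if the type matches, else default

    def lookup(self, obj_id, object_type):
        cache = self.named_caches.get(object_type, self.default_cache)
        return cache.get(obj_id)
```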
