58 results about "Cache invalidation" patented technology

Cache invalidation is a process in a computer system whereby entries in a cache are replaced or removed. It can be done explicitly, as part of a cache coherence protocol. In such a case, a processor changes a memory location and then invalidates the cached values of that memory location across the rest of the computer system.
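
As a concrete illustration of the explicit case described above, the following is a minimal sketch in which a writer updates a memory location and then invalidates the stale copies held by peer caches. All structures and names here (Cache, write_and_invalidate) are hypothetical and not taken from any of the patents listed below.

    // Minimal sketch of explicit cache invalidation: after the write, every
    // peer cache drops its copy of the address so later reads must refetch.
    #include <cstdint>
    #include <unordered_map>
    #include <vector>

    struct Cache {
        std::unordered_map<std::uint64_t, std::uint64_t> lines;  // address -> cached value
        void invalidate(std::uint64_t addr) { lines.erase(addr); }
    };

    void write_and_invalidate(std::uint64_t addr, std::uint64_t value,
                              std::unordered_map<std::uint64_t, std::uint64_t>& memory,
                              Cache& own, std::vector<Cache*>& peers) {
        memory[addr] = value;      // the processor changes the memory location
        own.lines[addr] = value;   // its own cache stays current
        for (Cache* peer : peers)  // cached values elsewhere are invalidated
            peer->invalidate(addr);
    }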

Method and system for limiting the use of user-specific software features

A server architecture for a digital rights management system that distributes and protects rights in content. The server architecture includes a retail site which sells content items to consumers, a fulfillment site which provides to consumers the content items sold by the retail site, and an activation site which enables consumer reading devices to use content items having an enhanced level of copy protection. Each retail site is equipped with a URL encryption object, which encrypts, according to a secret symmetric key shared between the retail site and the fulfillment site, information that is needed by the fulfillment site to process an order for content sold by the retail site. Upon selling a content item, the retail site transmits to the purchaser a web page having a link to a URL comprising the address of the fulfillment site and a parameter having the encrypted information. Upon following the link, the fulfillment site downloads the ordered content to the consumer, preparing the content if necessary in accordance with the type of security to be carried with the content. The fulfillment site includes an asynchronous fulfillment pipeline which logs information about processed transactions using a store-and-forward messaging service. The fulfillment site may be implemented as several server devices, each having a cache which stores frequently downloaded content items, in which case the asynchronous fulfillment pipeline may also be used to invalidate the cache if a change is made at one server that affects the cached content items. An activation site provides an activation certificate and a secure repository executable to consumer content-rendering devices which enables those content rendering devices to render content having an enhanced level of copy-resistance. The activation site “activates” client-reading devices in a way that binds them to a persona, and limits the number of devices that may be activated for a particular persona, or the rate at which such devices may be activated for a particular persona.
Owner:MICROSOFT TECH LICENSING LLC
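
The invalidation step in the entry above lends itself to a short sketch: a store-and-forward message records which content item changed, and each fulfillment server evicts that item from its local cache when the message is forwarded to it. The types below (InvalidationMessage, FulfillmentServer, MessageBus) are hypothetical illustrations, not the patent's actual components.

    #include <deque>
    #include <string>
    #include <unordered_map>
    #include <utility>
    #include <vector>

    struct InvalidationMessage { std::string content_id; };  // which item changed

    struct FulfillmentServer {
        std::unordered_map<std::string, std::string> content_cache;  // id -> content bytes
        void on_invalidate(const InvalidationMessage& msg) {
            content_cache.erase(msg.content_id);  // next request refetches fresh content
        }
    };

    struct MessageBus {
        std::deque<InvalidationMessage> stored;        // "store" half of store-and-forward
        std::vector<FulfillmentServer*> subscribers;   // every server device with a cache

        void publish(InvalidationMessage msg) { stored.push_back(std::move(msg)); }

        void forward_all() {                           // "forward" half, run asynchronously
            while (!stored.empty()) {
                for (FulfillmentServer* s : subscribers) s->on_invalidate(stored.front());
                stored.pop_front();
            }
        }
    };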

Storage area network file system

A shared storage distributed file system is presented that provides applications with transparent access to a storage area network (SAN) attached storage device. This is accomplished by providing clients read access to the devices over the SAN and by requiring most write activity to be serialized through a network attached storage (NAS) server. Both the clients and the NAS server are connected to the SAN-attached device over the SAN. Direct read access to the SAN-attached device is provided through a local file system on the client. Write access is provided through a remote file system on the client that utilizes the NAS server. A supplemental read path is provided through the NAS server for those circumstances where the local file system is unable to provide valid data reads. Consistency is maintained by comparing modification times in the local and remote file systems. Since writes occur over the remote file system, the consistency mechanism is capable of flushing data caches in the remote file system, and invalidating metadata and real-data caches in the local file system. It is possible to utilize unmodified local and remote file systems in the present invention by layering a new file system over the local and remote file systems. This new file system need only be installed at each client, allowing the NAS server file systems to operate unmodified. Alternatively, the new file system can be combined with the local file system.
Owner:DATAPLOW
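
The consistency mechanism in the entry above can be sketched as a modification-time check. The structures and the flush placeholder below are assumptions for illustration; the patent itself does not specify these names.

    #include <ctime>

    struct LocalFsCache  { std::time_t mtime; bool metadata_valid; bool data_valid; };
    struct RemoteFsCache { std::time_t mtime; bool dirty; };

    // Compare the modification times seen through the local (SAN) and remote
    // (NAS) paths; on mismatch, flush pending writes held by the remote file
    // system and invalidate the local file system's metadata and real-data
    // caches so subsequent direct SAN reads return valid data.
    void reconcile(LocalFsCache& local, RemoteFsCache& remote) {
        if (local.mtime != remote.mtime) {
            if (remote.dirty) {
                // flush_remote_data_cache();  // placeholder for the NAS write-back path
                remote.dirty = false;
            }
            local.metadata_valid = false;  // invalidate cached metadata
            local.data_valid = false;      // invalidate cached real data
            local.mtime = remote.mtime;    // adopt the newer timestamp
        }
    }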

Method and device for determining tasks to be migrated based on cache perception

CN103729248A (Active). Benefits: reduces the probability of resource contention; improves performance. Related terms: resource allocation, operating system, cache invalidation.
The invention discloses a method for determining tasks to be migrated based on cache perception. The method comprises the following steps: a source processor core and a target processor core are determined according to the load of each processor core; the cache misses and retired instructions of each task on the source processor core and the target processor core are monitored to obtain each task's cache misses per thousand instructions (MPKI) on those cores; the average MPKI of the source processor core and the average MPKI of the target processor core are computed; and the tasks to be migrated from the source processor core to the target processor core are determined according to these two average MPKI values. With this method, the operating system can perceive the behavior of programs and select more suitable tasks for migration. The invention further discloses a device for determining tasks to be migrated based on cache perception.
Owner:HUAWEI TECH CO LTD +1
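
The selection step in the entry above is stated in terms of per-core average MPKI (cache misses per thousand instructions). The sketch below uses hypothetical counters and one plausible heuristic, picking the source-core task whose MPKI is closest to the target core's average; the actual selection rule is defined by the patent, not by this code.

    #include <cmath>
    #include <cstdint>
    #include <vector>

    struct Task {
        int           id;
        std::uint64_t cache_misses;   // sampled from performance counters
        std::uint64_t instructions;   // retired instructions over the same window

        double mpki() const {         // cache misses per thousand instructions
            return instructions ? 1000.0 * cache_misses / instructions : 0.0;
        }
    };

    // Choose which task to migrate off the overloaded source core: here, the
    // task whose MPKI is closest to the target core's average MPKI.
    int pick_task_to_migrate(const std::vector<Task>& source_tasks, double target_avg_mpki) {
        int best_id = -1;
        double best_dist = 0.0;
        for (const Task& t : source_tasks) {
            double dist = std::fabs(t.mpki() - target_avg_mpki);
            if (best_id == -1 || dist < best_dist) { best_id = t.id; best_dist = dist; }
        }
        return best_id;  // -1 if the source core has no candidate tasks
    }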

Processor Cache write-in invalidation processing method based on memory access history learning

A processor Cache write-miss (write-in invalidation) processing method based on memory access history learning includes the following steps: (1) a Cache write-miss preprocessing step; (2) a Cache write-allocation policy setting step, in which each group is assigned either an immediate write-allocate or a delayed write-allocate policy; (3) for a group under the immediate write-allocate policy, the missing data of the corresponding Cache block is read back from memory immediately, merged with the write data to form a complete Cache block, and the complete block is written into the corresponding Cache block; for a group under the delayed write-allocate policy, the write data of the Cache write misses assigned to that group is collected, and once the collected write data covers an entire Cache block it is written directly into the corresponding Cache block without a memory read. The invention can eliminate a large number of unnecessary reads of Cache blocks from memory while handling Cache write misses, thereby reducing the memory bandwidth wasted in the process and further improving application performance.
Owner:LOONGSON TECH CORP
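
The two allocation policies in the entry above map onto a short sketch: immediate write-allocate reads the missing block from memory and merges the store data, while delayed write-allocate accumulates store data and installs the block only once every byte has been collected, skipping the memory read. Block size, structures, and function names are illustrative assumptions.

    #include <array>
    #include <cstddef>
    #include <cstdint>
    #include <cstring>

    constexpr std::size_t kBlockSize = 64;  // illustrative Cache block size in bytes

    struct CacheBlock {
        std::array<std::uint8_t, kBlockSize> data{};
        std::array<bool, kBlockSize>         collected{};  // bytes gathered by a delayed group
        bool valid = false;
    };

    // Immediate write-allocate: fetch the block from memory, merge the write
    // data into it, and install the complete block in the Cache.
    void immediate_write_allocate(CacheBlock& blk, const std::uint8_t* memory_block,
                                  std::size_t offset, const std::uint8_t* wdata, std::size_t len) {
        std::memcpy(blk.data.data(), memory_block, kBlockSize);  // read missing data back
        std::memcpy(blk.data.data() + offset, wdata, len);       // merge the write data
        blk.valid = true;
    }

    // Delayed write-allocate: collect write data for the block and install it
    // only when the whole block has been written, avoiding the memory read.
    void delayed_write_allocate(CacheBlock& blk, std::size_t offset,
                                const std::uint8_t* wdata, std::size_t len) {
        std::memcpy(blk.data.data() + offset, wdata, len);
        for (std::size_t i = 0; i < len; ++i) blk.collected[offset + i] = true;

        bool full = true;
        for (bool b : blk.collected) full = full && b;
        if (full) blk.valid = true;  // entire block collected: write it without a memory read
    }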