
72 results about "CPU cache" patented technology

A CPU cache is a hardware cache used by the central processing unit (CPU) of a computer to reduce the average cost (time or energy) to access data from the main memory. A cache is a smaller, faster memory, located closer to a processor core, which stores copies of the data from frequently used main memory locations. Most CPUs have different independent caches, including instruction and data caches, where the data cache is usually organized as a hierarchy of more cache levels (L1, L2, L3, L4, etc.).

System and methods for CPU copy protection of a computing device

The present disclosure relates to software-based techniques for managing the insertion of protected data blocks into the memory cache of a computerized device. In particular, the disclosure relates to preventing protected data blocks from being altered or evicted from the CPU cache, coupled with buffered software execution. The technique is based on identifying at least one conflicting data block whose memory mapping points to a designated cache line, and preventing that conflicting data block from being cached. Functional components of a vendor's software product, such as a game or video application, may be partially encrypted to allow protected yet functional operation and to prevent hacking and malicious use by non-licensed users.
Owner:TRULY PROTECT
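The conflict-detection step above can be sketched as a set-index comparison: a candidate block "conflicts" with a protected block when both addresses map to the same cache set, so caching it could evict the protected line. The cache geometry and function names below are illustrative assumptions, not taken from the patent.

```python
CACHE_LINE_SIZE = 64   # bytes per cache line (assumed)
NUM_SETS = 1024        # number of cache sets (assumed)

def cache_set_index(address: int) -> int:
    """Map an address to its cache set index."""
    return (address // CACHE_LINE_SIZE) % NUM_SETS

def find_conflicting_blocks(protected_addrs, candidate_addrs):
    """Return candidates whose cache set collides with a protected block."""
    protected_sets = {cache_set_index(a) for a in protected_addrs}
    return [a for a in candidate_addrs if cache_set_index(a) in protected_sets]
```

In this toy geometry, an address exactly `NUM_SETS * CACHE_LINE_SIZE` bytes away from a protected block lands in the same set and would be flagged, while the adjacent line would not.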

Virtual CPU scheduling method capable of enhancing real-time performance

The invention discloses a virtual CPU scheduling method that enhances real-time performance. The method comprises the following steps: a virtual machine management control tool accepts a user command to operate a virtual machine, together with scheduling parameters; the management control tool judges whether the command concerns a real-time virtual machine and, if so, calculates the conditions under which the real-time virtual machine remains schedulable and dynamically partitions physical CPU resources according to the result; scheduling then proceeds with a global earliest-deadline-first algorithm applied to the CPU resource pool running real-time virtual machines and a limit scheduling algorithm applied to the pool running non-real-time virtual machines; and the global earliest-deadline-first algorithm is optimized for virtual CPU cache hits. The method guarantees the real-time performance of real-time virtual machines, provides good isolation between the partitioned CPU resources, and reduces the impact on the performance of non-real-time virtual machines.
Owner:HUAZHONG UNIV OF SCI & TECH
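The core of the global earliest-deadline-first step can be sketched as follows: among all runnable virtual CPUs in the real-time pool, pick the one whose current job has the earliest absolute deadline. The vCPU representation here is an assumption for illustration; a real scheduler would also track budgets, CPU affinity, and preemption.

```python
from dataclasses import dataclass

@dataclass
class VCpu:
    name: str
    deadline: float   # absolute deadline of the current job
    runnable: bool

def pick_next(vcpus):
    """Return the runnable vCPU with the earliest deadline, or None if idle."""
    runnable = [v for v in vcpus if v.runnable]
    return min(runnable, key=lambda v: v.deadline, default=None)
```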

Method, device and system for forwarding network data packets

The invention provides a method, a device, and a system for forwarding network data packets. The method comprises: acquiring the data packets of a received data stream; performing network address translation (NAT) policy matching on the first packet of the stream and establishing a NAT relation table; and, for the remaining packets of the stream, determining the central processing unit (CPU) that should forward them according to their five-tuple information before and after translation together with the NAT relation table. With this method, device, and system, packets of the same data stream are dispatched to the same CPU for processing whether or not a network address and port translation (NAPT)/NAT scenario is present, reducing lock usage and CPU cache churn and thus improving network forwarding performance.
Owner:NEUSOFT CORP
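The dispatch idea can be sketched as follows: when the first packet of a flow is matched against the NAT policy, record both the pre- and post-translation five-tuples in a relation table pointing at one CPU, so every later packet of the flow, translated or not, lands on the same CPU. The table layout and names are assumptions for illustration.

```python
NUM_CPUS = 4
relation_table = {}   # five-tuple -> CPU id

def flow_cpu(five_tuple):
    """Deterministically hash a five-tuple onto a CPU."""
    return hash(five_tuple) % NUM_CPUS

def register_flow(pre_tuple, post_tuple):
    """On the first packet, bind both translated forms of the flow to one CPU."""
    cpu = flow_cpu(pre_tuple)
    relation_table[pre_tuple] = cpu
    relation_table[post_tuple] = cpu
    return cpu

def dispatch(five_tuple):
    """Return the CPU for a packet, consulting the NAT relation table first."""
    return relation_table.get(five_tuple, flow_cpu(five_tuple))
```

Because both tuple forms map to the same entry, a packet seen after translation is still steered to the CPU chosen for the original flow.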

Method and device for improving audio quality, and medium

The invention discloses a method and device for improving audio quality, and a readable storage medium. The method comprises the following steps: receiving audio data and storing it in a CPU cache; judging whether the CPU cache has remaining space; if it does not, storing the audio data in an SSD cache; judging whether the total voice traffic stored in the SSD cache is smaller than a first threshold for network-link voice transmission; if it is, judging whether the delay is smaller than a second threshold; and if the delay is not smaller than the second threshold, reducing the SSD cache size to reduce the delay. The method, device, and medium provided by the invention design a multi-level cache and dynamically adjust the size of the secondary cache according to voice delay, uplink and downlink voice traffic, and voice compression, ensuring the best overall balance of voice jitter quality and voice delay and improving audio quality to the greatest extent.
Owner:INSPUR SUZHOU INTELLIGENT TECH CO LTD
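The decision flow above can be sketched as a two-level buffer whose secondary (SSD) capacity shrinks when measured delay crosses a threshold. All capacities, thresholds, and the buffer model below are illustrative assumptions.

```python
class AudioCache:
    def __init__(self, cpu_capacity, ssd_capacity):
        self.cpu_capacity = cpu_capacity
        self.ssd_capacity = ssd_capacity
        self.cpu_used = 0
        self.ssd_used = 0

    def store(self, nbytes):
        """Prefer the primary buffer; spill to the secondary buffer when full."""
        if self.cpu_used + nbytes <= self.cpu_capacity:
            self.cpu_used += nbytes
            return "cpu"
        self.ssd_used += nbytes
        return "ssd"

    def adjust(self, voice_flow, flow_threshold, delay, delay_threshold, step):
        """Shrink the secondary buffer to cut delay, per the steps above."""
        if voice_flow < flow_threshold and delay >= delay_threshold:
            self.ssd_capacity = max(self.ssd_used, self.ssd_capacity - step)
        return self.ssd_capacity
```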

Method and system for protecting CPU Cache data after AC power failure

The invention discloses a method for protecting CPU cache data after an AC power failure. In the method, a CPLD sends a data protection command to the CPU after receiving AC power failure information; upon receiving the command, the CPU flushes the data in its cache to a non-volatile storage medium and then performs the subsequent ADR process. With the method, the data in the CPU cache can be effectively protected after an AC power failure simply by adding a step that flushes the cache to a non-volatile storage medium, without changing the read/write performance of the medium; the CPU cache data is thus not lost, the read/write performance of NVDIMM-N or other non-volatile storage media is unaffected, and latency is shortened. The invention further discloses a system for protecting CPU cache data after an AC power failure, which has the same beneficial effects.
Owner:ZHENGZHOU YUNHAI INFORMATION TECH CO LTD
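The sequence above can be modeled in software as a minimal mock: on AC failure the CPLD raises a protect command, the CPU flushes its dirty cache lines to the non-volatile medium, then the normal ADR flow continues. Everything here is an illustrative stand-in, not real platform firmware.

```python
def handle_ac_power_failure(dirty_cache_lines, nvm):
    """Flush dirty cache lines to NVM, then report readiness for ADR."""
    events = ["cpld: protect command sent"]
    for addr, data in dirty_cache_lines.items():
        nvm[addr] = data                      # flush one line to NVM
    events.append(f"cpu: flushed {len(dirty_cache_lines)} lines")
    events.append("cpu: entering ADR flow")
    return events
```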

Block device data cache power-down protection method and system

The invention discloses a power-down protection method and system for block-device data caches. The method comprises: setting a block-device information protection region, a cache-unit information protection region, and a cache-unit protection region within a power-down-protected memory region; configuring these regions in CPU write-through or no-cache mode; while power is present, storing block-device cache data in cache units of the cache-unit protection region, recording the corresponding block-device information in the block-device information protection region, and recording the corresponding cache-unit information in the cache-unit information protection region; and, when power is lost, identifying from the recorded content which block devices require data write-back and writing the corresponding data back to the storage medium. This resolves the problems of CPU cache asynchrony and protection, ensures complete power-down protection of the data, and offers relatively good performance.
Owner:INSPUR BEIJING ELECTRONICS INFORMATION IND
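The three protection regions can be sketched as a small bookkeeping structure: device metadata, per-unit mapping information, and the cached data itself, with a replay step that writes everything back after power loss. The layout below is an illustrative assumption, not the patent's actual memory map.

```python
class PowerDownProtect:
    def __init__(self):
        self.device_info = {}   # block-device information protection region
        self.unit_info = {}     # cache-unit information: unit id -> (device, block)
        self.units = {}         # cache-unit protection region: unit id -> data

    def cache_write(self, unit_id, device_id, block, data):
        """Record block-device cache data plus the bookkeeping for recovery."""
        self.device_info[device_id] = {"dirty": True}
        self.unit_info[unit_id] = (device_id, block)
        self.units[unit_id] = data

    def write_back(self, storage):
        """On power loss, replay every recorded unit to its device's blocks."""
        for unit_id, (device_id, block) in self.unit_info.items():
            storage.setdefault(device_id, {})[block] = self.units[unit_id]
        return storage
```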

Data search method

The invention relates to a data search method in which data is organized and stored according to the fractal-tree principle. The method includes: the CPU receives a data read request and searches for the requested data in a cache line; if the data is found there, it is read and the search terminates; if not, the possible storage position of the data in the CPU cache is determined by numeric comparison, and the corresponding interval of the CPU cache is searched; if the data is found in the CPU cache, it is read and the search terminates; if not, the possible storage position of the data in main memory is determined, and the corresponding interval of main memory is searched; if the data is found in main memory, it is read and the search terminates; otherwise the data is searched for on the hard disk. The method reduces the number of data exchanges between the cache and main memory and thereby increases effective CPU speed.
Owner:量子云未来(北京)信息科技有限公司 +1
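The tiered lookup above can be sketched by assuming each tier holds sorted keys, so "numeric comparison" (binary search) narrows the candidate interval before the next, slower tier is consulted. The tier contents and names are toy assumptions, not the patent's fractal-tree layout.

```python
import bisect

def search_tier(sorted_keys, key):
    """Binary-search one tier; return True if the key is present."""
    i = bisect.bisect_left(sorted_keys, key)
    return i < len(sorted_keys) and sorted_keys[i] == key

def tiered_search(key, cache_line, cpu_cache, memory, disk):
    """Search tiers in order of increasing latency; report where the key was found."""
    for name, tier in (("cache_line", cache_line), ("cpu_cache", cpu_cache),
                       ("memory", memory), ("disk", disk)):
        if search_tier(tier, key):
            return name
    return "not found"
```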

Self-adaptive testing method and device of CPU cache memory

The invention provides a self-adaptive testing method and device for CPU cache memory. The method includes: before testing, setting the frequency configuration corresponding to the target test frequency; when testing starts, injecting a BIST start command through the JTAG interface; parsing the JTAG command into direct control signals; and running a BIST test flow with EMA (extra margin adjustment) scanning driven by those control signals. If an optimal EMA is found during the BIST flow, the test passes, and the internal EEPROM stores the optimal EMA configuration value for that frequency, for the chip to use when it enters normal working mode; otherwise the test fails and the chip is classified as defective. Because the optimal EMA value obtained by testing is used to configure the chip during normal operation, each chip works at its own optimal EMA value and reaches a balance point between the best memory performance and stability.
Owner:FUZHOU ROCKCHIP SEMICON
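The EMA scan can be sketched as running the BIST at each candidate EMA setting and keeping a passing one. The pass predicate is a stand-in for the real on-chip BIST result, and picking the lowest passing value as "optimal" is an assumption for illustration (lower margin settings typically trade stability for speed).

```python
def ema_scan(candidates, bist_passes):
    """Return the best passing EMA value, or None if every setting fails."""
    passing = [ema for ema in candidates if bist_passes(ema)]
    return min(passing) if passing else None   # assumed: lowest passing value
```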

Method for prefetching CPU cache data based on merged address difference value sequence

The invention provides a CPU cache data prefetching method based on merged address-difference (delta) sequences, comprising the following steps: collecting the current memory-access information of the cache whose data is to be prefetched and, when it is collected, attempting to obtain information from a historical information table to update and obtain the current delta-sequence segment; according to the current delta-sequence segment, updating the historical information table, the delta mapping array, and the delta-sequence-segment sub-table, and removing the first delta to obtain the delta sequence to be predicted; matching this sequence repeatedly against the prefix subsequences of the complete sequence segments stored in the dynamic mapping pattern table to obtain the best-matching complete sequence segment and the corresponding predicted target delta; and adding the predicted target delta to the memory-access address in the current access information to obtain the predicted target address. This removes the prior-art need to cascade multiple tables to store variable-length delta sequences: a single table suffices, simplifying the storage and lookup logic for memory-access patterns.
Owner:SHANGHAI ADVANCED RES INST CHINESE ACADEMY OF SCI
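The delta-prefetching idea can be sketched as follows: keep the recent sequence of address deltas, match it against previously seen sequences in a single pattern table to predict the next delta, and add that delta to the current address. This is a toy illustration of the general technique, not the patent's exact table layout or matching scheme.

```python
from collections import deque

class DeltaPrefetcher:
    def __init__(self, history=3):
        self.last_addr = None
        self.deltas = deque(maxlen=history)   # recent address deltas
        self.patterns = {}                    # delta sequence -> next delta

    def access(self, addr):
        """Record an access; return a predicted prefetch address or None."""
        prediction = None
        if self.last_addr is not None:
            delta = addr - self.last_addr
            key = tuple(self.deltas)
            if key:
                self.patterns[key] = delta    # learn: this sequence led to delta
            self.deltas.append(delta)
            nxt = self.patterns.get(tuple(self.deltas))
            if nxt is not None:
                prediction = addr + nxt       # predicted target address
        self.last_addr = addr
        return prediction
```

Feeding the sketch a regular strided stream (for example, addresses 0, 8, 16, ...) makes it start predicting the next address once the delta history fills and repeats.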