176 results about "Memory load" patented technology

Load memory is non-volatile storage for the user program, data, and configuration. When a project is downloaded to the CPU, it is first stored in the Load memory area, which is located either on a memory card (if present) or in the CPU.

Method and device for realizing data center resource load balance in proportion to comprehensive allocation capability

Status: Active · Publication: CN102185779A · Benefits: timely determination of load status; solves load imbalance · Classification: data switching networks · Concepts: user needs, data center
The invention relates to a method and a device for realizing data center resource load balance. The method comprises the following steps: acquiring the current utilization rates of the attributes of each physical machine in a scheduling domain, where the attributes comprise central processing unit (CPU) load, memory load, and network load, and determining the physical machine for the currently allocated task according to the principle of fair distribution in proportion to a server's allocation capability, its actual allocated task weight, and its expected task weight; determining a mean load value of the attributes across the scheduling domain from the current utilization rates, and calculating the difference between each physical machine's actual and expected task weights from the mean load value and the machine's predicted load values; and selecting, for the currently allocated task, the physical machine whose difference between actual and expected task weight is smallest. The device comprises a selection control module, a calculation processing module, and an allocation execution module. The technical scheme solves the problem of physical server load imbalance caused by a mismatch between user demand specifications and physical server specifications.
Owner:田文洪
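
The core of the selection step is a fair-share comparison: each machine's expected task weight is proportional to its allocation capability, and the next task goes to the machine furthest below that share. The sketch below illustrates this rule in Python; the class, field names, and the omission of the mean/predicted load correction described in the abstract are simplifying assumptions for illustration, not the patented implementation.

```python
from dataclasses import dataclass

@dataclass
class PhysicalMachine:
    name: str
    capability: float        # relative allocation capability of the server (assumed unit)
    allocated_weight: float  # weight of the tasks already placed on this machine

def pick_machine(machines):
    """Pick the machine whose actual allocated weight lags its expected share the most."""
    total_capability = sum(m.capability for m in machines)
    total_allocated = sum(m.allocated_weight for m in machines)

    def weight_gap(m):
        # Expected weight is the fair share proportional to the machine's capability.
        expected = total_allocated * m.capability / total_capability
        return m.allocated_weight - expected

    return min(machines, key=weight_gap)

machines = [PhysicalMachine("pm1", capability=4.0, allocated_weight=3.0),
            PhysicalMachine("pm2", capability=2.0, allocated_weight=2.5)]
print(pick_machine(machines).name)  # -> "pm1": it is furthest below its fair share
```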

Multi-queue peak-alternation scheduling model and multi-queue peak-alternation scheduling method based on task classification in cloud computing

The invention discloses a multi-queue peak-alternation scheduling model and a multi-queue peak-alternation scheduling method based on task classification in cloud computing. The model comprises a task manager, a local resource manager, a global resource manager, and a scheduler. The method comprises the following steps: first, according to each task's resource demands, dividing tasks into CPU (central processing unit) intensive, I/O (input/output) intensive, and memory intensive types; sorting the resources according to their CPU, I/O, and memory load conditions, and staggering resource usage peaks during task scheduling; and scheduling a task of a given intensive type to a resource whose load on that index is relatively light, for example scheduling a CPU-intensive task to a resource with a relatively low CPU utilization rate. The disclosed model and method can effectively realize load balancing, improve scheduling efficiency, and increase the resource utilization rate.
Owner:GUANGDONG UNIV OF PETROCHEMICAL TECH
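
A minimal sketch of the classify-then-stagger idea described above: each task is tagged by its dominant resource demand and dispatched to the node whose load on that one dimension is currently lowest. The node representation and function names are illustrative assumptions, not the patented scheduler.

```python
def classify(task_demand):
    """task_demand: dict with 'cpu', 'io', 'mem' demand estimates; return the dominant kind."""
    return max(('cpu', 'io', 'mem'), key=lambda k: task_demand[k])

def dispatch(task_demand, nodes):
    """nodes: list of dicts like {'name': 'n1', 'cpu': 0.7, 'io': 0.2, 'mem': 0.4}."""
    kind = classify(task_demand)
    # Stagger peaks: a CPU-heavy task goes to the node with the least CPU load, etc.
    return min(nodes, key=lambda n: n[kind])

nodes = [
    {'name': 'n1', 'cpu': 0.80, 'io': 0.10, 'mem': 0.30},
    {'name': 'n2', 'cpu': 0.20, 'io': 0.70, 'mem': 0.50},
]
print(dispatch({'cpu': 0.9, 'io': 0.1, 'mem': 0.2}, nodes)['name'])  # -> 'n2'
```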

Method for storing and processing small log type files in Hadoop distributed file system

The invention relates to the field of the Hadoop distributed file system (HDFS) and discloses a method for storing and processing small log-type files in the HDFS. According to the method, files are merged with their physical neighbours according to physical location, and a copy-on-write mechanism is used to optimize reading and writing of the small files. Specifically, the small log-type files are merged according to their physical path; when reading or writing a small log-type file, the client first obtains from the NameNode the combined file and the metadata of its index, and then reads or writes the small-file data within the combined file according to that index. With this processing method, the memory load of the small files' metadata is shifted from the NameNode to the client, which effectively solves the problem of low HDFS efficiency when processing a large number of small files. Because the client caches the small files' metadata, access to the small files is faster, and a user sequentially accessing small files that are adjacent in physical location does not need to send a metadata request to the NameNode.
Owner:JIANGSU R & D CENTER FOR INTERNET OF THINGS +2
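
The merge-and-index scheme above can be illustrated with a small sketch: small files that are physically "nearby" are packed into one combined file, and an index mapping each original file to an (offset, length) slice is what the client caches and uses for reads. The local-file layout and index format here are assumptions; the patent operates on HDFS blocks and NameNode metadata rather than local files.

```python
from pathlib import Path

def merge_small_files(paths, combined_path):
    """Pack small files into one combined file; return a {name: (offset, length)} index."""
    index, offset = {}, 0
    with open(combined_path, 'wb') as out:
        for p in sorted(paths):                 # group "nearby" files by path order
            data = Path(p).read_bytes()
            index[str(p)] = (offset, len(data))
            out.write(data)
            offset += len(data)
    return index                                # the client would cache this index

def read_small_file(combined_path, index, name):
    """Read one original small file back out of the combined file via the index."""
    offset, length = index[name]
    with open(combined_path, 'rb') as f:
        f.seek(offset)
        return f.read(length)
```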

Iterator register for structured memory

Loading data from a computer memory system is disclosed. A memory system is provided, wherein some or all data stored in the memory system is organized as one or more pointer-linked data structures, and one or more iterator registers are provided. A first pointer chain, having two or more pointers leading to a first element of a selected pointer-linked data structure, is loaded into a selected iterator register. A second pointer chain, having two or more pointers leading to a second element of the selected pointer-linked data structure, is then loaded into the selected iterator register; this second load reuses the portions of the first pointer chain that are common to both chains.

Modifying data stored in a computer memory system is also disclosed. A memory system is provided, along with one or more iterator registers, each of which includes two or more pointer fields for storing the pointers that form a pointer chain leading to a data element. A local state associated with a selected iterator register is generated by performing one or more register operations that relate to the selected iterator register and involve the pointers in its pointer fields. A pointer-linked data structure in the memory system is then updated according to the local state.
Owner:INTEL CORP
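
A rough software analogy of the pointer-chain reuse described above: an iterator register caches the chain of pointers leading to the last element it loaded, so loading a neighbouring element re-walks only the suffix of the path that differs. This is an illustrative Python model, not the hardware register design claimed in the patent.

```python
class IteratorRegister:
    def __init__(self):
        self.chain = []          # cached pointer chain: list of (node, key) steps

    def load(self, root, path):
        """path: sequence of keys leading from root to the target element."""
        # Reuse the longest common prefix of the cached chain.
        common = 0
        node = root
        while (common < len(self.chain) and common < len(path)
               and self.chain[common][1] == path[common]):
            node = self.chain[common][0]
            common += 1
        self.chain = self.chain[:common]
        # Walk (and cache) only the remaining, differing suffix.
        for key in path[common:]:
            node = node[key]     # follow the pointer
            self.chain.append((node, key))
        return node

root = {'a': {'b': {'c': 1, 'd': 2}}}
it = IteratorRegister()
it.load(root, ['a', 'b', 'c'])   # walks a -> b -> c and caches the chain
it.load(root, ['a', 'b', 'd'])   # reuses the cached a -> b prefix, walks only d
```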

Virtualization-based method and device for adjusting QoS (quality of service) of node memory of NUMA (non-uniform memory access architecture)

The invention discloses a virtualization-based method and device for adjusting the QoS (quality of service) of NUMA (non-uniform memory access architecture) node memory. The method includes: acquiring the occupancy state of memory resources and predicting, according to a certain rule, the memory each virtual machine will require at the next moment, then deciding whether to adjust the memory-load balance so as to guarantee memory QoS; when memory resources are insufficient, starting a memory balance adjustment operation, sensing the NUMA nodes according to each virtual machine's memory quota proportion so as to decide from which virtual machines memory is reclaimed and to which virtual machines memory is allocated, computing the sizes of the reclaimable and allocatable memory, and sending the resulting optimal memory value for each guest operating system down to the actual adjustment component. This solves technical problems such as a running virtual machine being unable to sense the memory usage state of the node on which it currently resides, and memory QoS not being adjustable from a system-wide perspective.
Owner:ZHEJIANG UNIV
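
The adjustment loop above can be sketched as: predict each virtual machine's next-moment memory demand, reclaim from machines holding more than they need, and grant to machines predicted to fall short. The exponential-moving-average predictor and field names below are assumptions standing in for the patent's "certain rule"; the NUMA-node awareness and the actual ballooning layer are omitted.

```python
def predict_demand(history, alpha=0.5):
    """Exponentially weighted prediction of the next-moment memory demand (MB); an assumed rule."""
    pred = history[0]
    for x in history[1:]:
        pred = alpha * x + (1 - alpha) * pred
    return pred

def rebalance(vms):
    """vms: list of dicts {'name', 'allocated', 'history'} with sizes in MB.
    Returns a target memory value per VM to hand to the actual adjustment layer."""
    plan = {vm['name']: vm['allocated'] for vm in vms}
    donors, takers = [], []
    for vm in vms:
        need = predict_demand(vm['history'])
        gap = vm['allocated'] - need            # positive -> reclaimable, negative -> deficit
        (donors if gap > 0 else takers).append((vm['name'], gap))
    pool = sum(gap for _, gap in donors)
    for name, gap in donors:
        plan[name] -= gap                       # reclaim the surplus
    for name, gap in takers:
        grant = min(-gap, pool)
        plan[name] += grant                     # grant from the reclaimed pool
        pool -= grant
    return plan
```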