Dispatching method of virtual processor based on NUMA high-performance network cache resource affinity

A virtual processor and network caching technology, applied in resource allocation, software simulation/interpretation/emulation, multi-programming devices, etc. It addresses the problems that virtual machine memory resources and VCPU-memory affinity are not fully analyzed and that the VCPU packet-processing rate is not optimal, thereby improving processing speed and reducing the performance impact.

Active Publication Date: 2014-12-10
SHANGHAI JIAO TONG UNIV

AI Technical Summary

Problems solved by technology

[0010] The existing Xen kernel supports several NUMA configuration methods, including fully localized memory (NUMA-aware placement) and distributing memory across several nodes with the virtual processor (VCPU) then scheduled onto the assigned nodes (NUMA-aware scheduling). However, neither approach fully analyzes the virtual machine's memory resources from the perspective of memory distribution, so the affinity between the VCPU and its memory remains suboptimal, which inevitably affects the speed at which the VCPU processes memory.
[0011] Therefore, those skilled in the art are committed to developing a virtual processor scheduling method based on NUMA high-performance network cache resource affinity, to solve the problem that the rate at which the VCPU processes network data packets is not optimal.




Embodiment Construction

[0040] The embodiments of the present invention are described in detail below in conjunction with the accompanying drawings. This embodiment is implemented on the premise of the technical solution of the present invention, and detailed implementation methods and specific operating procedures are provided, but the protection scope of the present invention is not limited to the embodiments described below.

[0041] Figure 3 shows a schematic flowchart of the virtual processor scheduling method based on NUMA high-performance network cache resource affinity of the present invention. Referring to Figure 3, the scheduling method includes the following steps:

[0042] Step S1: Under the NUMA architecture, when the network card of the virtual machine is started, obtain the distribution of the network card cache on each NUMA node;

[0043] Step S2: based on the affinity relationships among the NUMA nodes, obtain the affinity of each NUMA node to the net...
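Step S1 can be illustrated with a small sketch. On Linux, the kernel exposes the NUMA node of a PCI device, including a network card, through sysfs (a value of -1 means the device has no NUMA affinity). The helper below is a hypothetical illustration, not the patented implementation; the `sysfs_root` parameter is an assumption added only so the function can be exercised outside a real `/sys` tree:

```python
from pathlib import Path

def nic_numa_node(ifname: str, sysfs_root: str = "/sys") -> int:
    """Return the NUMA node on which the given network interface's
    PCI device resides, or -1 if the kernel reports no affinity
    or the attribute is missing."""
    path = Path(sysfs_root) / "class" / "net" / ifname / "device" / "numa_node"
    try:
        return int(path.read_text().strip())
    except (FileNotFoundError, ValueError):
        return -1
```

Repeating this query for each virtual NIC (and inspecting where its DMA/cache pages were allocated) yields the per-node distribution that Step S1 requires.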



Abstract

The invention discloses a dispatching method of a virtual processor based on NUMA high-performance network cache resource affinity. The dispatching method comprises the following steps: under an NUMA framework, when a virtual machine network card is started, the distribution of the network card cache on each NUMA node is obtained; based on the affinity relationships among the NUMA nodes, the affinity of each NUMA node towards the network card cache is obtained; a target NUMA node is determined by combining the distribution of the network card cache on each NUMA node with the affinity of each NUMA node towards the network card cache; and the virtual processor is dispatched to a CPU of the target NUMA node. The method resolves the problem that, under the NUMA framework, the affinity between the VCPU of a virtual machine and the network card cache is not optimal, which causes the virtual machine network card to process network data packets slowly.
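The four steps of the abstract can be sketched in code. This is a minimal illustration under assumed inputs, not the patented algorithm: `cache_pages[n]` stands for the number of network-card cache pages resident on NUMA node `n` (step 1), and `distance[a][b]` is a SLIT-style inter-node distance in which a smaller value means higher affinity (step 2); both names and the inverse-distance scoring are assumptions made here for illustration.

```python
def pick_target_node(cache_pages, distance):
    """Step 3: score each candidate node by summing the cache pages on
    every node, weighted by the inverse distance to that node, and
    return the node with the highest score."""
    nodes = range(len(cache_pages))

    def score(n):
        return sum(cache_pages[m] / distance[n][m] for m in nodes)

    return max(nodes, key=score)

def schedule_vcpu(vcpu, cache_pages, distance, cpus_of_node):
    """Step 4: dispatch the VCPU to a CPU of the target NUMA node.
    Returns a (vcpu, pcpu, node) placement tuple for illustration."""
    target = pick_target_node(cache_pages, distance)
    return (vcpu, cpus_of_node[target][0], target)
```

For example, with 100 cache pages on node 0, 10 on node 1, and a symmetric distance matrix `[[10, 20], [20, 10]]`, node 0 scores highest and the VCPU is pinned to one of its CPUs.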

Description

Technical field

[0001] The invention relates to the field of computer system virtualization, and in particular to a virtual processor scheduling method based on NUMA high-performance network cache resource affinity.

Background technique

[0002] Virtualization technology usually integrates computing or storage functions that originally required multiple physical devices into a single, relatively powerful physical server, thereby realizing the integration and redistribution of hardware resources and improving the utilization of hardware devices. It plays a very important role in cloud computing and data center construction.

[0003] A virtual machine monitor is a software management layer that sits between the hardware and the traditional operating system. Its main function is to manage real physical devices, such as physical CPUs and memory, and to abstract the underlying hardware into corresponding virtual device interfaces, so that multiple operating systems can get the virtual h...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06F9/455; G06F9/50
Inventors: 管海兵 (Guan Haibing), 马汝辉 (Ma Ruhui), 李健 (Li Jian), 贾小龙 (Jia Xiaolong)
Owner SHANGHAI JIAO TONG UNIV