Method for parallelly processing data in extensible manner for network I/O (input/output) virtualization

A technology for parallel and scalable data processing, applied in the fields of electrical digital data processing, software simulation/interpretation/emulation, program control design, etc. It addresses computing resource bottlenecks, packet loss and performance degradation, achieving dynamic and efficient scaling, parallelism and extensibility, and improved efficiency.

Active Publication Date: 2016-08-17
SHANGHAI JIAO TONG UNIV

AI Technical Summary

Problems solved by technology

When the overhead of the single-threaded data processing method exceeds the computing power of a physical CPU, the throughput of virtualized I/O resources is limited by the computing resource bottleneck of data processing.
SR-IOV, on the other hand, completes Layer 2 data switching with hardware assistance, which greatly alleviates this bottleneck; however, a similar computing resource bottleneck then appears in the VF driver inside the virtual machine.
An instantaneous network burst from a single virtual machine causes heavy packet loss and performance degradation, and overall I/O throughput also drops significantly when multiple virtual machines alternate bursts of traffic.
In short, the data processing methods used for high-performance network I/O virtualization carry a high computing resource overhead, which makes it difficult to fully exploit the physical wire speed of high-performance NICs when building virtualized I/O resources.

Method used



Examples


Embodiment Construction

[0020] The embodiments of the present invention are described in detail below in conjunction with the accompanying drawings. This embodiment is implemented on the premise of the technical solution of the present invention, and a detailed implementation and a specific operation process are given, but the application scenario is not limited to this embodiment.

[0021] In this embodiment, I/O paravirtualization is taken as an example: the packet protocol analysis performed in the network card driver is parallelized symmetrically, and the software switch in the VMM is parallelized as an asymmetric pipeline.
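
As a rough illustration of what symmetric parallelization of the driver's protocol analysis can look like (a minimal sketch, not the patent's actual driver code), the following Go example fans incoming packets out to several identical parser workers; the Packet type, the parseProtocol stub and the flow-hash dispatch policy are assumptions introduced for this example.

```go
// A minimal sketch only: symmetric parallelization of packet protocol analysis.
// Packet, parseProtocol and the flow-hash dispatch policy are assumptions for
// illustration, not the patent's driver code.
package main

import (
	"fmt"
	"hash/fnv"
	"sync"
)

type Packet struct {
	FlowID  string // e.g. a 5-tuple key
	Payload []byte
}

// parseProtocol stands in for the per-packet protocol analysis done in the NIC driver.
func parseProtocol(p Packet) string {
	return fmt.Sprintf("flow=%s len=%d", p.FlowID, len(p.Payload))
}

func main() {
	const nWorkers = 4 // one queue per identical (symmetric) worker
	queues := make([]chan Packet, nWorkers)
	var wg sync.WaitGroup

	for i := 0; i < nWorkers; i++ {
		queues[i] = make(chan Packet, 64)
		wg.Add(1)
		go func(in <-chan Packet) { // every worker runs the same code
			defer wg.Done()
			for p := range in {
				_ = parseProtocol(p)
			}
		}(queues[i])
	}

	// Dispatcher: hash on the flow ID so packets of one flow keep their order.
	for _, p := range []Packet{{FlowID: "a", Payload: []byte{1}}, {FlowID: "b", Payload: []byte{2, 3}}} {
		h := fnv.New32a()
		h.Write([]byte(p.FlowID))
		queues[h.Sum32()%nWorkers] <- p
	}
	for _, q := range queues {
		close(q) // no more packets; let the workers drain and exit
	}
	wg.Wait()
}
```

Hashing on the flow identifier keeps packets of the same flow on one worker, so per-flow ordering is preserved while aggregate parsing capacity scales with the number of workers.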

[0022] The present invention proposes a parallel and scalable data processing method for network I/O virtualization, which includes the following steps:

[0023] Step 1: For I/O paravirtualization, as shown in Figure 1, the packet protocol analysis in the network card driver and the data processing flow involved in the software switch in the VMM are decomposed, includ...
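
As an illustration of decomposing the receive path into pipelined stages connected by queues (a sketch under assumed stage names receive, analyze, vswitch and deliver; the patent's actual decomposition is only partially quoted above), the following Go example runs each stage concurrently and passes frames downstream through channels.

```go
// A minimal sketch only: the receive path decomposed into pipeline stages
// connected by queues. The stage names (receive, analyze, vswitch, deliver)
// are assumptions for illustration; the patent's own decomposition is only
// partially quoted above.
package main

import "fmt"

type frame struct {
	data []byte
	meta string
}

// stage 1: pull frames from the (simulated) NIC receive ring
func receive(out chan<- frame) {
	for i := 0; i < 3; i++ {
		out <- frame{data: []byte{byte(i)}}
	}
	close(out)
}

// stage 2: protocol analysis
func analyze(in <-chan frame, out chan<- frame) {
	for f := range in {
		f.meta = fmt.Sprintf("len=%d", len(f.data))
		out <- f
	}
	close(out)
}

// stage 3: software-switch lookup in the VMM
func vswitch(in <-chan frame, out chan<- frame) {
	for f := range in {
		f.meta += " port=vm0"
		out <- f
	}
	close(out)
}

func main() {
	c1, c2, c3 := make(chan frame, 16), make(chan frame, 16), make(chan frame, 16)
	go receive(c1)
	go analyze(c1, c2)
	go vswitch(c2, c3)
	for f := range c3 { // stage 4: deliver to the guest (printed here)
		fmt.Println(f.meta)
	}
}
```

Because each stage runs concurrently (and, in a real system, on its own core), the stages overlap in time, which is the essence of the asymmetric pipeline arrangement.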



Abstract

The invention discloses a method for parallelly processing data in an extensible manner for network I/O (input/output) virtualization. Based on the layered structure of the network, data receiving and transmitting, protocol analysis, data stream management and the upper-level application are treated as network I/O links that cooperate in parallel in an asymmetric pipeline manner; sufficient computational resources are injected at computational bottleneck points in a symmetric parallel mode, so that data processing capacity is parallelized and extensible; and multi-core resources are managed flexibly according to system load, eliminating the performance bottleneck that the computing capacity of traditional I/O driving methods imposes on virtualized I/O resources.
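
To make the idea of managing multi-core resources according to system load concrete, here is a hedged Go sketch that grows or shrinks the number of workers serving a bottleneck queue based on its backlog; the thresholds, the worker cap and the scaling policy are invented for illustration and are not taken from the patent.

```go
// A minimal sketch only: elastically growing or shrinking the worker pool at a
// bottleneck stage according to queue backlog. The thresholds, the worker cap
// and the scaling policy are invented for illustration, not taken from the patent.
package main

import (
	"fmt"
	"time"
)

func worker(in <-chan int, quit <-chan struct{}) {
	for {
		select {
		case <-quit: // retired by the monitor when load is low
			return
		case job, ok := <-in:
			if !ok {
				return
			}
			_ = job
			time.Sleep(100 * time.Microsecond) // stand-in for per-packet processing cost
		}
	}
}

func main() {
	jobs := make(chan int, 1024)
	quit := make(chan struct{}, 8)
	workers := 1
	go worker(jobs, quit)

	go func() { // a bursty producer
		for i := 0; i < 5000; i++ {
			jobs <- i
		}
		close(jobs)
	}()

	for i := 0; i < 10; i++ { // monitor loop: scale on backlog
		backlog := len(jobs)
		switch {
		case backlog > 512 && workers < 4:
			workers++
			go worker(jobs, quit)
			fmt.Println("scale up ->", workers, "workers, backlog", backlog)
		case backlog < 64 && workers > 1:
			workers--
			quit <- struct{}{}
			fmt.Println("scale down ->", workers, "workers, backlog", backlog)
		}
		time.Sleep(10 * time.Millisecond)
	}
}
```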

Description

Technical field

[0001] The invention relates to the field of network I/O virtualization data processing, and in particular to a parallel and scalable data processing method for network I/O virtualization.

Background technique

[0002] I/O virtualization is a software technology that abstracts network devices between the underlying hardware and the workload. At present there are three approaches to I/O virtualization: full virtualization, paravirtualization and hardware-assisted virtualization. Full virtualization simulates network I/O hardware entirely in software; this approach is inefficient, cannot make full use of the physical line speed of 10GbE networks, and is no longer able to meet the virtualization requirements of current high-performance network NIC devices. Currently, high-performance network interface devices (NICs) at 10GbE and above provide hardware-assisted single-root I/O virtualization (SR-IOV) support and...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06F9/455; G06F13/20
CPC: G06F9/45558; G06F13/20
Inventor: 管海兵, 胡小康, 李宗垚, 马汝辉, 李健
Owner: SHANGHAI JIAO TONG UNIV