Method for improving performance of multiprocess programs of application delivery communication platforms

A communication platform and application delivery technology, applied in the field of computer networks, which solves the problems of low CPU cache efficiency and degraded system performance and achieves the effect of an improved cache hit rate.

Status: Inactive
Publication Date: 2015-01-14
般固(北京)网络科技有限公司

AI Technical Summary

Problems solved by technology

[0016] In the environment of multi-core processors and multi-queue network cards, although the above method can improve the processing efficiency of the program, it cannot guarant...



Embodiment Construction

[0034] In order to make the object, technical solution and advantages of the present invention clearer, the present invention will be further described in detail below in conjunction with the embodiments and accompanying drawings.

[0035] As shown in Figure 5, the method for improving the performance of a multi-process program of an application delivery communication platform includes:

[0036] Step 10 hashes the data packet to the network card queue according to the source IP;

[0037] In order to ensure that the hard interrupts and soft interrupts for data packets from the same source IP are processed by the same processor core, the default RSS hash algorithm in the Linux kernel network card driver module, which hashes on source IP, source port, destination IP and destination port, is replaced with a hash algorithm based on the source IP address only.
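As an illustration only, and not the patented driver modification itself, a comparable source-IP-only receive hash can usually be requested from a multi-queue NIC through the standard ETHTOOL_SRXFH ioctl, which is roughly what the command "ethtool -N eth0 rx-flow-hash tcp4 s" does. The interface name eth0 is an assumed example, and the NIC driver must support configurable RSS hash fields:

    /* Sketch: ask the NIC to hash incoming TCP/IPv4 packets on the source IP
     * address only, using the generic ethtool ioctl (requires root). */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <net/if.h>
    #include <linux/ethtool.h>
    #include <linux/sockios.h>

    int main(void)
    {
        struct ethtool_rxnfc cfg;
        struct ifreq ifr;
        int fd = socket(AF_INET, SOCK_DGRAM, 0);  /* any socket carries SIOCETHTOOL */
        if (fd < 0) { perror("socket"); return 1; }

        memset(&cfg, 0, sizeof(cfg));
        cfg.cmd       = ETHTOOL_SRXFH;   /* set receive-flow-hash fields */
        cfg.flow_type = TCP_V4_FLOW;     /* apply to TCP over IPv4       */
        cfg.data      = RXH_IP_SRC;      /* hash on the source IP only   */

        memset(&ifr, 0, sizeof(ifr));
        strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);  /* assumed interface name */
        ifr.ifr_data = (void *)&cfg;

        if (ioctl(fd, SIOCETHTOOL, &ifr) < 0)
            perror("ETHTOOL_SRXFH");
        close(fd);
        return 0;
    }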

[0038] Step 20 binds the data packet in the network card queue to the corresponding CPU core;

[0039] The hard interrupts and s...
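For step 20, the hard interrupt of each network card queue has to be delivered to one chosen CPU core, so that softirq processing follows on the same core. The sketch below assumes the usual Linux mechanism of writing a one-hot CPU bitmask to /proc/irq/<irq>/smp_affinity; the IRQ number 41 and core index 0 are illustrative values, and the real IRQ numbers of the queues must be looked up in /proc/interrupts:

    /* Sketch: pin one NIC queue's interrupt to one CPU core (requires root). */
    #include <stdio.h>

    /* Write a one-hot CPU mask for the given IRQ; returns 0 on success. */
    static int bind_irq_to_cpu(int irq, int cpu)
    {
        char path[64];
        FILE *f;

        snprintf(path, sizeof(path), "/proc/irq/%d/smp_affinity", irq);
        f = fopen(path, "w");
        if (!f) { perror(path); return -1; }
        fprintf(f, "%x\n", 1u << cpu);   /* only this core receives the IRQ */
        fclose(f);
        return 0;
    }

    int main(void)
    {
        /* Example: suppose /proc/interrupts shows queue eth0-TxRx-0 on IRQ 41. */
        return bind_irq_to_cpu(41, 0) ? 1 : 0;
    }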



Abstract

The invention discloses a method for improving the performance of multi-process programs of application delivery communication platforms. The method includes hashing data packets to network card queues according to their source IPs (Internet Protocol addresses); binding the data packets in the network card queues to corresponding CPU (central processing unit) cores; binding the data packets received by the CPU cores to corresponding processes for processing; respectively creating service programs for the processes, setting the REUSEPORT option on the service processes, and binding the IPs and ports; running the modified service programs and adjusting the number of queues enabled on the multi-queue network cards according to the number of service processes; and binding each service process to one CPU core. By the method, the hard interrupts and soft interrupts of the network cards can be balanced, and the same CPU core is used for receiving and sending a given data packet, so that the CPU cache hit ratio is increased.
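For illustration, the per-process part of the method summarized above, namely a listening socket with the REUSEPORT option bound to the same IP and port in every service process, together with binding each service process to one CPU core, can be sketched with standard Linux interfaces as follows. The port 8080 and the core index taken from the command line are example values, not taken from the invention:

    /* Sketch of one worker process: pin to a core, then listen with SO_REUSEPORT. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    int main(int argc, char **argv)
    {
        int core = (argc > 1) ? atoi(argv[1]) : 0;  /* core assigned to this worker */
        int one = 1;

        /* Bind this service process to a single CPU core. */
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(core, &set);
        if (sched_setaffinity(0, sizeof(set), &set) < 0)
            perror("sched_setaffinity");

        /* Listening socket with SO_REUSEPORT: every worker binds the same IP:port. */
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }
        setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &one, sizeof(one));

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family      = AF_INET;
        addr.sin_port        = htons(8080);         /* example port */
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 || listen(fd, 128) < 0) {
            perror("bind/listen");
            return 1;
        }

        /* Accept loop: the kernel distributes connections among the workers. */
        for (;;) {
            int c = accept(fd, NULL, NULL);
            if (c >= 0)
                close(c);                           /* real request handling goes here */
        }
    }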

Description

Technical field
[0001] The invention relates to computer network technology, and in particular to a method for improving the performance of a multi-process program on an application delivery communication platform.
Background technique
[0002] The CPU cache is a temporary memory located between the CPU and main memory, mainly intended to bridge the gap between the CPU's operating speed and the read/write speed of memory. According to the order of data access and how tightly it is coupled to the CPU, the CPU cache can be divided into a first-level (L1) cache and a second-level (L2) cache. The first-level cache is further divided into a data cache and an instruction cache, used respectively to store data and to promptly decode the instructions that operate on that data.
[0003] Usually each core of a multi-core processor has a small independent L1 cache, while all cores share a larger L2 cache. The speed at which a program accesses data is as follows:
[0004] If the data accessed by the program is...
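The cache layout described in paragraph [0003], a small private L1 cache per core and a larger cache shared by all cores, can be inspected on a Linux system through sysfs. The following sketch is illustrative only and assumes the standard /sys/devices/system/cpu/cpu0/cache/ layout:

    /* Sketch: print the cache hierarchy of CPU 0 (level, type, size, sharing). */
    #include <stdio.h>

    static void read_field(int index, const char *field, char *buf, size_t len)
    {
        char path[128];
        snprintf(path, sizeof(path),
                 "/sys/devices/system/cpu/cpu0/cache/index%d/%s", index, field);
        FILE *f = fopen(path, "r");
        buf[0] = '\0';
        if (f) {
            if (fgets(buf, (int)len, f)) {
                for (char *p = buf; *p; p++)   /* strip trailing newline */
                    if (*p == '\n') *p = '\0';
            }
            fclose(f);
        }
    }

    int main(void)
    {
        char level[16], type[32], size[32], shared[64];
        for (int i = 0; i < 8; i++) {
            read_field(i, "level", level, sizeof(level));
            if (!level[0])
                break;                         /* no more cache levels */
            read_field(i, "type", type, sizeof(type));
            read_field(i, "size", size, sizeof(size));
            read_field(i, "shared_cpu_list", shared, sizeof(shared));
            printf("L%s %-12s size=%-8s shared by cores %s\n",
                   level, type, size, shared);
        }
        return 0;
    }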


Application Information

IPC(8): G06F9/48, G06F9/50
Inventors: 高明, 张广龙, 彭建章
Owner: 般固(北京)网络科技有限公司