
Message preprocessing method and device

A message preprocessing method and device, applied in the field of data processing, which address the problems of heavy CPU resource occupation, poor availability, and low overall performance of network equipment, achieving high overall performance while avoiding occupation of CPU resources.

Active Publication Date: 2014-08-13
NEW H3C TECH CO LTD


Problems solved by technology

[0003] In the related art, it has been proposed to preprocess packets in software, but this leads to heavy occupation of CPU resources, resulting in extremely low overall performance and poor availability of the network device.



Examples


Embodiment 1

[0037] As an exemplary embodiment, the PCI-E DMA channel corresponding to the PCI-E port can be bridged with the network-port DMA channel corresponding to the NA engine, so that packets from the forwarding chip are delivered directly to the NA engine, which then preprocesses them.
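A minimal sketch in C of how such a bridge might be set up, assuming hypothetical driver hooks (alloc_dma_coherent, pcie_dma_set_ring, naport_dma_set_ring) and a generic descriptor layout; the patent does not specify these interfaces, so everything below is illustrative rather than the actual implementation:

#include <stdint.h>
#include <stddef.h>

#define RING_DEPTH 256

/* Generic buffer descriptor shared by both DMA channels (illustrative). */
struct dma_desc {
    uint64_t buf_phys;   /* physical address of the packet buffer        */
    uint32_t len;        /* filled in by hardware when a packet lands    */
    uint32_t flags;      /* ownership / end-of-packet bits               */
};

/* Hypothetical low-level driver hooks; real hardware exposes its own API. */
extern uint64_t alloc_dma_coherent(size_t size);
extern void pcie_dma_set_ring(uint64_t ring_phys, unsigned depth);
extern void naport_dma_set_ring(uint64_t ring_phys, unsigned depth);

void bridge_pcie_to_na_engine(void)
{
    /* One shared descriptor ring: the PCI-E DMA channel writes packets
     * received from the forwarding chip into these buffers, and the
     * network-port DMA channel of the NA engine consumes the very same
     * descriptors, so packets reach the NA engine without a CPU copy. */
    uint64_t ring_phys = alloc_dma_coherent(RING_DEPTH * sizeof(struct dma_desc));

    pcie_dma_set_ring(ring_phys, RING_DEPTH);    /* producer: PCI-E port side */
    naport_dma_set_ring(ring_phys, RING_DEPTH);  /* consumer: NA engine side  */
}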

[0038] Specifically, Figure 4 shows a schematic diagram of preprocessing packets arriving at the PCI-E port through the network accelerator engine according to an embodiment of the present invention.

[0039] As shown in Figure 4, the process of preprocessing packets arriving at the PCI-E port through the network accelerator engine according to an embodiment of the present invention includes:

[0040] (1) Chip configuration during initialization. Specifically, this includes: the DMA driver configures a corresponding physical memory address for the PCI-E DMA, and the network-port driver configures a corresponding physical memory address for the network-port DMA...
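A compact sketch of that initialization step, again with invented helper names (alloc_dma_coherent, pcie_dma_write_base, netport_dma_write_base) and illustrative region sizes, since the patent only states which driver programs which channel:

#include <stdint.h>
#include <stddef.h>

#define PCIE_DMA_REGION_SZ    (1u << 20)   /* illustrative sizes */
#define NETPORT_DMA_REGION_SZ (1u << 20)

/* Hypothetical helpers. */
extern uint64_t alloc_dma_coherent(size_t size);
extern void pcie_dma_write_base(uint64_t phys);
extern void netport_dma_write_base(uint64_t phys);

void chip_init_dma_memory(void)
{
    /* DMA driver: program the physical memory region used by the PCI-E DMA. */
    pcie_dma_write_base(alloc_dma_coherent(PCIE_DMA_REGION_SZ));

    /* Network-port driver: program the physical memory region used by the
     * network-port DMA of the NA engine. */
    netport_dma_write_base(alloc_dma_coherent(NETPORT_DMA_REGION_SZ));
}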

Embodiment 2

[0073] As another exemplary embodiment, after a packet has been transferred to memory through the PCI-E DMA channel, it can be added to the queue of the network port corresponding to the NA engine, such as the GMAC FIFO queue shown in Figure 3, so that after the packet is resent through the network port, the NA engine preprocesses it.
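A minimal sketch of that loop-back step, assuming hypothetical hooks (pcie_dma_poll_completion, gmac_fifo_enqueue) and a simple buffer handle; the real GMAC queue programming is chip-specific and not given in the patent:

#include <stdint.h>
#include <stdbool.h>

/* Handle to a packet the PCI-E DMA channel has already written to memory. */
struct pkt_buf {
    uint64_t phys;   /* physical address of the packet data */
    uint32_t len;    /* length written by the PCI-E DMA     */
};

/* Hypothetical driver hooks. */
extern bool pcie_dma_poll_completion(struct pkt_buf *out);
extern int  gmac_fifo_enqueue(unsigned port, const struct pkt_buf *buf);

void requeue_to_na_engine(unsigned na_port)
{
    struct pkt_buf buf;

    /* Each packet landed in memory by the PCI-E DMA is appended to the GMAC
     * FIFO of the port that feeds the NA engine; after it is resent through
     * that port, the NA engine parses, classifies and distributes it, so the
     * CPU never touches the payload. */
    while (pcie_dma_poll_completion(&buf))
        (void)gmac_fifo_enqueue(na_port, &buf);
}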

[0074] Specifically, Figure 6 shows a schematic diagram of preprocessing packets arriving at the PCI-E port through the network accelerator engine according to another embodiment of the present invention.

[0075] As shown in Figure 6, the process of preprocessing packets arriving at the PCI-E port through the network accelerator engine according to another embodiment of the present invention includes:

[0076] (1) Chip configuration during initialization. Specifically, this includes: 1) the DMA driver configures the corresponding physical memory address for the PCI-E DMA, and the network-port driver conf...



Abstract

The invention provides a message preprocessing method and device which are applied to a service board provided with a network accelerator engine. The method includes the steps that a first DMA channel corresponding to a PCI-E port is established, through which messages from a forwarding chip of the service board are written into memory; a notification message is generated by a DMA driver corresponding to the first DMA channel and sent to the network accelerator engine, wherein the notification message contains attribute information of the messages; and the messages are preprocessed by the network accelerator engine according to the notification message. According to this technical scheme, the messages from the forwarding chip are preprocessed by the network accelerator engine, so occupation of CPU resources is avoided and the overall performance of the network device is improved.
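A minimal sketch of the claimed flow, assuming an invented notification layout and an invented NA-engine entry point (na_engine_notify); the patent states only that the notification carries the packets' attribute information, not its concrete fields:

#include <stdint.h>

/* Illustrative notification message built by the DMA driver of the first
 * DMA channel; the concrete attribute fields are assumptions. */
struct pkt_notification {
    uint64_t pkt_phys;    /* where the first DMA channel wrote the packet */
    uint32_t pkt_len;     /* packet length                                */
    uint16_t src_queue;   /* receive queue of the PCI-E port              */
    uint16_t attr_flags;  /* other attribute information                  */
};

/* Hypothetical entry point exposed by the network accelerator engine. */
extern int na_engine_notify(const struct pkt_notification *n);

/* Called by the DMA driver when the first DMA channel finishes writing a
 * packet from the forwarding chip into memory. */
int dma_rx_complete(uint64_t pkt_phys, uint32_t pkt_len, uint16_t queue)
{
    struct pkt_notification n = {
        .pkt_phys   = pkt_phys,
        .pkt_len    = pkt_len,
        .src_queue  = queue,
        .attr_flags = 0,
    };

    /* The NA engine preprocesses the packet (parsing, classification,
     * distribution) according to the notification, so the CPU is not used. */
    return na_engine_notify(&n);
}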

Description

Technical Field

[0001] The present invention relates to the technical field of data processing, and in particular to a message preprocessing method and a message preprocessing device.

Background Art

[0002] In existing network equipment, as shown in Figure 1, the usual hardware design uses the PCI-E port as the local CPU channel. However, because the DMA packet-receiving function of the PCI-E port is relatively simple, it can only receive packets by queue and cannot achieve load balancing through preprocessing such as parsing, classifying, and distributing packets.

[0003] In the related art, it has been proposed to preprocess packets in software, but this leads to heavy occupation of CPU resources, resulting in extremely low overall performance and poor availability of the network device.

[0004] Therefore, how to implement effective preprocessing of packets arriving from the PCI-E port and avoid consuming a large amount o...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): H04L12/02
Inventor: 钟晋明
Owner: NEW H3C TECH CO LTD