
Event distribution method and device applied to multithreaded system

A multithreading and event-handling technology in the field of communication, addressing the problem that load balancing and avoidance of resource competition cannot be achieved at the same time.

Active Publication Date: 2016-11-23
ZHEJIANG DAHUA TECH CO LTD

AI Technical Summary

Problems solved by technology

[0007] The embodiments of the present invention provide an event distribution method and device applied in a multithreaded system, to solve the problem in the prior art that load balancing and avoidance of resource competition cannot be achieved simultaneously.



Examples

Experimental program
Comparison scheme
Effect test

example 1

[0069] Example 1: Event A is detected as triggered, and by querying the identifiers recorded in each serialized cache area it is determined that event A is associated with serialized cache area 1. First, serialized cache area 1 is queried for an allocation mark; the query returns none, so event A is determined to be the first triggered event of the serialized event group to which it belongs. Event A can therefore be assigned directly to the tail of the event queue. If no other events are in the queue at that moment, event A can be called and executed immediately by a corresponding idle thread; otherwise it waits in line.

[0070] It should be noted that serialized cache area 1 enters the working state when event A is assigned to the event queue, and remains in that state until event A has been executed by the corresponding thread. While serialized cache area 1 is in the working state, if other events other tha...
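The flow of Example 1, together with the working-state note above, can be sketched as follows. This is a minimal illustration only: the names `SerializedBuffer` and `dispatch_first` are hypothetical, not taken from the patent, and a plain `deque` stands in for the system's event queue.

```python
from collections import deque

class SerializedBuffer:
    """Illustrative stand-in for a serialized cache area (names assumed)."""
    def __init__(self, buffer_id, event_ids):
        self.buffer_id = buffer_id
        self.event_ids = set(event_ids)  # identifiers of the serialized event group
        self.allocation_mark = False     # no event of this group dispatched yet
        self.working = False             # True while a group event is queued/executing
        self.cached = deque()            # group events waiting their turn

event_queue = deque()                    # shared queue polled by worker threads

def dispatch_first(event, buf):
    """Example 1 case: no allocation mark, so this is the first event of
    its group and goes straight to the tail of the shared event queue."""
    if not buf.allocation_mark:
        event_queue.append(event)        # tail of the event queue
        buf.allocation_mark = True       # record that the group has dispatched
        buf.working = True               # buffer enters the working state
        return True
    return False
```

The buffer stays in the working state from the moment the event is queued until a worker thread finishes executing it, which is what blocks later events of the same group from overtaking it.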

example 2

[0071] Example 2: Following Example 1, event B is subsequently detected as triggered and determined to be associated with serialized cache area 1 as well. Before this point only event A had been triggered, and it has already been assigned to the event queue, where it waits to be called and executed by a thread; hence no other events are cached in serialized cache area 1. The allocation mark is then queried (after each event is allocated to the event queue, an allocation mark is set in the corresponding serialized cache area, indicating that an event of this group has been cached in that space and has been allocated to the event queue or already called by a thread), and it is determined that event B is not the first triggered event of the serialized event group corresponding to serialized cache area 1. It is further checked whether serialized cache area 1 is in the working state, ...

example 3

[0072] Example 3: Following Example 2, event C is then detected as triggered and determined to be associated with serialized cache area 1 as well. In practice the time each event needs to execute is uncertain, so after event C is triggered it is necessary to check whether any event is cached in serialized cache area 1. If one is, this indicates that an event with a business-logic relationship to event C is executing at the current moment and that another related event is already cached in serialized cache area 1; event C must then also be cached in serialized cache area 1 and wait to be allocated in the order in which events entered the area. For example, if event B is cached in serialized cache area 1, then after event A finishes executing, serialized cache area 1 switches to the rest state and event B can be a...
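The completion step implied by Example 3 can be sketched as follows: when the running event of a group finishes, the buffer drops to the rest state and, if events are cached, the oldest one is promoted to the event queue in FIFO order. The function name and structure are illustrative assumptions.

```python
from collections import deque

class Buf:
    """Minimal stand-in for a serialized cache area (illustrative)."""
    def __init__(self):
        self.working = True        # an event of the group is currently executing
        self.cached = deque()      # later group events, in arrival order

def on_event_finished(buf, event_queue):
    """Example 3 sketch: the thread running the group's event has finished."""
    buf.working = False            # buffer switches to the rest state
    if buf.cached:
        nxt = buf.cached.popleft() # earliest cached event is allocated first
        event_queue.append(nxt)
        buf.working = True         # buffer works again for the promoted event
    return event_queue
```

This FIFO promotion is what preserves the business-logic order of the serialized group while still letting unrelated events flow through the shared queue.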



Abstract

The invention relates to the field of communication technology, in particular to an event distribution method and device applied to a multithreaded system, to solve the prior-art problem that load balancing and avoidance of resource competition cannot be realized simultaneously. With the method and device, when any event is detected as triggered: if the event is an associated event, whether it is cached, or distributed directly to the head or the tail of the event queue, is decided according to whether it is the first triggered event of the serialized event group to which it belongs and according to the cache state and working state of the corresponding serialized cache area; if the event is an independent event, it is distributed directly to the tail of the event queue to await calling by a corresponding thread. All events can thus be distributed and scheduled in a balanced manner, load balance is guaranteed, serialized events are executed serially according to their business-logic relations, and resource competition is avoided.
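The abstract's full decision flow can be condensed into one dispatcher sketch. The head-vs-tail placement follows the abstract's wording; everything else (names, the exact busy test) is an assumption made for illustration, not the patent's definitive implementation.

```python
from collections import deque

class SerializedBuffer:
    """Minimal stand-in for a serialized cache area (illustrative names)."""
    def __init__(self):
        self.allocation_mark = False
        self.working = False
        self.cached = deque()

def dispatch(event, buf, event_queue):
    """buf is None for an independent event, else the serialized cache
    area of the event's group."""
    if buf is None:
        event_queue.append(event)      # independent event: straight to the tail
        return
    if not buf.allocation_mark:
        event_queue.append(event)      # first of its group: tail of the queue
        buf.allocation_mark = True
        buf.working = True
    elif buf.working or buf.cached:
        buf.cached.append(event)       # group busy: cache to preserve order
    else:
        event_queue.appendleft(event)  # group idle: head of the queue
        buf.working = True
```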

Description

Technical Field

[0001] The invention relates to the field of communication technology, in particular to an event distribution method and device applied in a multithreaded system.

Background Technique

[0002] With the development of multi-core processors, multithreaded programming is increasingly used to exploit the full performance of the CPU. In existing multithreaded systems, to keep the CPU load balanced, services are divided into units of fine granularity, for example executed in threads in units of events.

[0003] Event allocation strategies in existing multithreaded systems mainly include static binding allocation and dynamic binding allocation, where:

[0004] Figure 1(a) is a schematic diagram of the event allocation model for static binding allocation, in which events related to the same resource or the same group of resources are bound to a specified thread...
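Static binding, as described in the prior art, can be illustrated by a one-line routing rule: every event of a given resource always lands on the same fixed thread. The modulo-hash scheme below is an assumed concrete example, not taken from the patent.

```python
def static_bind(resource_id: int, num_threads: int) -> int:
    """Prior-art sketch (illustrative): statically bind all events of one
    resource to a fixed worker thread via a stable modulo mapping. This
    serializes the resource's events but cannot rebalance load at runtime."""
    return resource_id % num_threads
```

The fixed mapping is exactly why static binding avoids resource competition but cannot achieve load balancing, which is the trade-off the invention targets.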

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06F9/48, G06F9/50
CPC: G06F9/4843, G06F9/4881, G06F9/5083
Inventors: 何林强, 陈超
Owner ZHEJIANG DAHUA TECH CO LTD