
Flow State Aware QoS Management Without User Signalling

A flow-state-aware QoS technology that operates without user signalling, applied in the field of communications networks and methods of operating a communications network. It addresses the problems that schemes which ask senders to reduce their sending rate are unsuitable for real-time video flows and that congestion adversely affects all users indiscriminately, so as to increase the quality of service offered.

Inactive Publication Date: 2010-06-03
RAZOOM
Cites: 3, Cited by: 39


Benefits of technology

[0084]By operating a packet subnet to introduce, into a stored set of communication identifiers, on or before the commencement of a new communication, a communication identifier which enables the identification of packets belonging to the new packet communication, and by discriminating against packets containing a communication identifier belonging to said set when forwarding packets during a period of congestion, a packet subnet operator is able to concentrate the adverse effects of that congestion on selected communications. By additionally removing communication identifiers from said set prior to the cessation of the associated communication, communications that have been in existence for a period of time are treated preferentially to communications that have been in existence for a shorter period of time. This has the advantage that the quality of service afforded to a communication increases as the age of the communication increases. This in turn is less annoying to users receiving communications than the random nature of packet discard applied in conventional congestion alleviation mechanisms, which might result in a communication a user has been receiving for some time being degraded whilst a newly started communication is allowed to continue.
[0086]In preferred embodiments, said predetermined condition comprises the addition of an identifier to said set of stored identifiers. In this way, the age of a communication relative to other flows determines how packets of the flow are treated on the onset of congestion.
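Purely as an illustration (not the patent's implementation; all names and the fixed set size below are assumptions), the following Python sketch shows one way a node could maintain such a set: the identifiers of the most recently started flows are held in a bounded first-in-first-out structure, adding a new identifier displaces the oldest one, and during congestion packets whose identifier is still in the set are the first candidates for discard.

from collections import deque

class VulnerableFlowSet:
    """Hypothetical sketch of the set of flows vulnerable to discard.

    Newly started flows are added; the arrival of a new identifier is the
    predetermined condition that pushes the oldest identifier out, so the
    longer a flow has existed, the less likely its packets are to be
    discriminated against during congestion.
    """

    def __init__(self, max_size=32):
        self._order = deque()        # identifiers, oldest first
        self._members = set()        # same identifiers, for O(1) lookup
        self._max_size = max_size

    def flow_started(self, flow_id):
        if len(self._order) >= self._max_size:
            oldest = self._order.popleft()
            self._members.discard(oldest)
        self._order.append(flow_id)
        self._members.add(flow_id)

    def may_discard(self, flow_id, congested):
        # Packets of flows still in the set are discarded in preference to
        # packets of longer-lived flows, and only while congestion persists.
        return congested and flow_id in self._members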
[0091]In this way, it is ensured that the communications identified by the communication identifiers in said set of flows vulnerable to discard represent sufficient traffic to allow said node to alleviate said congestion.
[0094]In preferred embodiments, said method further comprises, on a high level of congestion being reached in said subnet, reading said communication identifier from any one packet or any pre-determined number of packets corresponding to different flows, received at a network node; and adding said communication identifier, or set of identifiers, to said set of flows vulnerable to discard. This provides a mechanism for increasing the number of packets discarded on the advent of a higher level of congestion, and thereby reacting more strongly to higher levels of congestion. Furthermore, by reading a communication identifier from packet(s) received at the subnet at a given time, the probability of selecting a communication which is contributing to the higher level of congestion is increased.
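Continuing the same hypothetical sketch, the escalation step could be expressed as sampling the flow identifiers of packets actually arriving at the congested node, which by construction are likely to belong to flows contributing to the congestion, and adding them to the vulnerable set:

def escalate_on_high_congestion(vulnerable_set, arriving_packets, sample_count=4):
    # Hypothetical sketch: read the flow identifier from a predetermined
    # number of arriving packets belonging to different flows and add each
    # to the set of flows vulnerable to discard.
    sampled = set()
    for packet in arriving_packets:
        flow_id = packet["flow_id"]               # assumed packet representation
        if flow_id not in sampled:
            vulnerable_set.flow_started(flow_id)  # reuse the insertion path above
            sampled.add(flow_id)
        if len(sampled) >= sample_count:
            break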
[0096]This has the advantage of providing another gradation in the increase of quality of service offered to a communication with the age of the communication.

Problems solved by technology

Although such schemes prevent congestion, QoS management must also manage so-called “elastic” traffic where there is potentially a need for a minimum guaranteed rate but frequently a desire to transmit the flow as fast as possible, subject to network congestion constraints and constraints on maximum sending rates.
These schemes allow users access to communications resources but attempt to cause senders to decrease their sending rate on the onset of congestion.
This is unsuitable for video flows, however, since real-time video servers cannot reduce their sending rate.
In most flow control schemes, all users are adversely affected by the onset of congestion.
Current deployments of flow-based control, however, are limited to the edge of the network.
Flow-level traffic control only at the edge, however, cannot guarantee flow-level QoS.
On the other hand, building a scalable control architecture for flow-level traffic control along the data path is challenging, because the number of flows in a network is huge.
However, the two approaches have outstanding issues.
As mentioned earlier, FAN is not designed to support the varied QoS requirements of different services.
This may stabilize the transport network in general, but the network provider cannot generate additional profit, because FAN cannot support services that require special QoS treatment.
Enabling the signalling function in the terminal may create a security hole in the network.
Second, this proposal focuses on flow-based control in the access network, not the core.
In the core, the number of flows is high, and call-by-call flow-level control in RACF is difficult to achieve.
However, none of the above proposals provides a method of managing contention in a packet network which allows flow-based QoS mechanisms to offer:



Examples


First embodiment

[0142]FIG. 4 relates to the first preferred embodiment and shows an expansion of the function 6, containing sub-functions 6.1 to 6.4. The size of buffer 6.1 is based on the following considerations. Firstly, as all flows are being policed at input 6.2 against their individual capacity allocations, it is possible to arrange the operation to be such that the sum of all capacity allocations is never larger than the output link capacity. In such a case, only the simultaneous and independent forwarding of packets from two or more input links to the same output link would cause the need for buffering at the output. But two other conditions are considered within the scope of this first embodiment of the invention:
[0143]Sudden surges of traffic onto a specific output link, for example due to traffic being re-routed following a link failure. This may happen, for example, in some applications where alternative paths are established between the content source and a group of receiving e...
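The policing condition described above can be stated simply: the sum of the individual per-flow capacity allocations admitted onto an output link must never exceed that link's capacity, so that (absent the surge conditions just listed) output buffering is only needed to absorb simultaneous arrivals from several input links. A hypothetical admission check, with illustrative names and units, might look like this:

def admit_allocation(allocations, requested_rate, link_capacity):
    # Hypothetical sketch: admit a new per-flow capacity allocation only if
    # the sum of all allocations on the output link stays within capacity.
    return sum(allocations.values()) + requested_rate <= link_capacity

# Illustrative use (rates in bit/s):
# allocations = {"flow-a": 2_000_000, "flow-b": 5_000_000}
# admit_allocation(allocations, 3_000_000, 10_000_000)  -> True  (10 Mbit/s total)
# admit_allocation(allocations, 4_000_000, 10_000_000)  -> False (11 Mbit/s total)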

Third embodiment

[0184]In another embodiment, this delay interval may be a pre-determined fixed short interval. In a third embodiment, every packet is automatically delayed for a pre-determined fixed short interval, but function 6.5.4 is informed only of those that are to be examined for possible deletion as described next.

[0185]If the Control Function 6.5.4 detects there is already a delayed packet for the same aggregate identity (e.g. there is already a delayed packet waiting to go to the same given end-user), then function 6.5.4 proceeds as follows:
[0186]If any delayed packet of this same aggregate identity belongs to a flow with flow status "vulnerable to discard", function 6.5.4 instructs function 6.5.2 to delete that packet.
[0187]If there are no such packets that can be deleted because of the "vulnerable to discard" status, the lowest preference priority packet is deleted from the Delay / Deletion function 6.5.2.
[0188]The state of the flow id of this deleted packet is changed to "vulnerable to di...
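The selection rule applied by the control function can be sketched as follows (hypothetical Python with assumed field names; functions 6.5.2 and 6.5.4 themselves are elements of the node, not this code):

def choose_delayed_packet_to_delete(delayed_packets, flow_status):
    # Hypothetical sketch of the rule described in [0186]-[0188]:
    # prefer deleting a delayed packet whose flow is already marked
    # "vulnerable to discard"; otherwise delete the lowest preference
    # priority packet and mark its flow as vulnerable to discard.
    for packet in delayed_packets:
        if flow_status.get(packet["flow_id"]) == "vulnerable to discard":
            return packet
    victim = min(delayed_packets, key=lambda p: p["priority"])  # lower value = lower preference (assumption)
    flow_status[victim["flow_id"]] = "vulnerable to discard"
    return victim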



Abstract

Conventional packet network nodes react to congestion in the packet network by dropping packets in a manner which is perceived by users to be indiscriminate. In embodiments of the present invention, indiscriminate packet discards are prevented by causing packets to be discarded according to bandwidth allocations that intelligently track flow sending rates. Flows are allocated bandwidth based on policy information. Where such policy information indicates that a flow should be treated as delay-sensitive, the present invention includes means to allocate an initial minimum rate that will be guaranteed, and such flows will also have the use of additional capacity that varies depending on the number of such flows that currently share an available pool of capacity. This provides a congestion alleviation method which is less annoying to users, since communications that have been in existence for longer are less susceptible to component packets being deleted.
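Read purely illustratively (the equal-sharing policy and the names below are assumptions, not stated in the abstract), the rate offered to such a delay-sensitive flow can be thought of as its guaranteed minimum plus a share of the available pool that shrinks as more flows share it:

def offered_rate(guaranteed_min, pool_capacity, sharing_flow_count):
    # Hypothetical sketch: guaranteed minimum plus an equal share of the
    # currently available pool of capacity.
    if sharing_flow_count <= 0:
        return guaranteed_min
    return guaranteed_min + pool_capacity / sharing_flow_count

# e.g. 2 Mbit/s guaranteed plus a 12 Mbit/s pool shared by 3 flows -> 6 Mbit/s
# offered_rate(2_000_000, 12_000_000, 3) == 6_000_000.0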

Description

CROSS REFERENCE TO RELATED APPLICATIONS
[0001]This application claims the benefit of priority to U.S. Provisional Application No. 61/118,964, filed on Dec. 1, 2008, titled "FLOW STATE AWARE QoS MANAGEMENT WITHOUT USER SIGNALLING", which is herein incorporated by reference in its entirety.
DESCRIPTION OF THE INVENTION
[0002]1. Field of the Invention
[0003]The present invention relates to a communications network and a method of operating a communications network.
[0004]2. Description of Background Art
[0005]Recently, the demand for streaming video to a computer via the Internet has grown strongly. This has led to a need to supply increasing amounts of video material over local communication networks (whether the copper pairs used by telephone network operators or the coaxial cables used by cable television network operators).
[0006]In telephony networks this additional demand is being met by the introduction of Digital Subscriber Loop (DSL) technology. As its name suggests, this technology c...


Application Information

Patent Type & Authority: Application (United States)
IPC(8): H04L12/56
CPC: H04L12/5695; H04L47/10; H04L47/12; H04L47/20; H04L47/2433; Y02B60/33; H04L47/32; H04L47/41; H04L47/805; H04L47/828; H04L47/2483; H04L47/70; Y02D30/50
Inventor: ADAMS, JOHN LEONARD
Owner: RAZOOM