
68 results about "Packet prioritization" patented technology

Tiered contention multiple access (TCMA): a method for priority-based shared channel access

Quality of Service (QoS) support is provided by a Tiered Contention Multiple Access (TCMA) distributed medium access protocol that schedules transmission of different types of traffic based on their service quality specifications. In one embodiment, a wireless station is supplied with data from a source having a lower QoS priority, QoS(A), such as file-transfer data, while another wireless station is supplied with data from a source having a higher QoS priority, QoS(B), such as voice and video data. Each wireless station determines the urgency class of its pending packets according to a scheduling algorithm; for example, file-transfer data is assigned a lower urgency class and voice and video data a higher one. There are several urgency classes, which indicate the desired ordering. Pending packets in a given urgency class are transmitted before packets of a lower urgency class by relying on class-differentiated urgency arbitration times (UATs), the idle-time intervals required before the random backoff counter is decreased. In another embodiment, packets are reclassified in real time by a scheduling algorithm that adjusts the class assigned to packets based on observed performance parameters and on negotiated QoS requirements. Further, for packets assigned the same arbitration time, additional differentiation into more urgency classes is achieved through the contention resolution mechanism employed, yielding hybrid packet prioritization methods. An Enhanced DCF Parameter Set, carried in a control packet sent by the AP to the associated stations, contains the class-differentiated parameter values needed to support TCMA. These parameters can be changed by different algorithms to support call admission and flow control functions and to meet the requirements of service level agreements.
Owner:AT&T INTPROP I L P
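The ordering rule behind the class-differentiated urgency arbitration times can be sketched briefly: a station may start counting down its random backoff only after the channel has been idle for its class's UAT, so a higher-urgency class (shorter UAT) tends to win contention even against a lower-urgency station with a smaller backoff. A minimal illustration, with invented class names and slot values that are not taken from the patent:

```python
# Hypothetical UAT table: higher urgency -> shorter arbitration time (slots).
# The concrete values are illustrative only.
UAT_SLOTS = {"voice_video": 2, "file_transfer": 4}

def slots_until_transmit(urgency_class, backoff):
    """Total idle slots a station waits: its class UAT plus its random backoff."""
    return UAT_SLOTS[urgency_class] + backoff

def contention_winner(stations):
    """Given (name, urgency_class, backoff) tuples, return the station whose
    UAT + backoff elapses soonest, i.e. the one that transmits first."""
    return min(stations, key=lambda s: slots_until_transmit(s[1], s[2]))

# A voice station with a larger backoff still beats a file-transfer station
# with a smaller backoff, because its arbitration time is shorter.
winner = contention_winner([
    ("A", "file_transfer", 1),   # waits 4 + 1 = 5 idle slots
    ("B", "voice_video", 2),     # waits 2 + 2 = 4 idle slots
])
```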

Packet prioritization and associated bandwidth and buffer management techniques for audio over IP

The present invention is directed to voice communication devices in which an audio stream is divided into a sequence of individual packets, each of which is routed via pathways that can vary depending on the availability of network resources. All embodiments of the invention rely on an acoustic prioritization agent that assigns a priority value to the packets. The priority value is based on factors such as whether the packet contains voice activity and the degree of acoustic similarity between this packet and adjacent packets in the sequence. A confidence level, associated with the priority value, may also be assigned. In one embodiment, network congestion is reduced by deliberately failing to transmit packets that are judged to be acoustically similar to adjacent packets; the expectation is that, under these circumstances, traditional packet loss concealment algorithms in the receiving device will construct an acceptably accurate replica of the missing packet. In another embodiment, the receiving device can reduce the number of packets stored in its jitter buffer, and therefore the latency of the speech signal, by selectively deleting one or more packets within sustained silences or non-varying speech events. In both embodiments, the ability of the system to drop appropriate packets may be enhanced by taking into account the confidence levels associated with the priority assessments.
Owner:AVAYA INC
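The congestion-reduction embodiment drops packets judged acoustically similar to their neighbours, trusting the receiver's packet loss concealment to rebuild the gap, and gates the drop on the confidence attached to the priority value. A hypothetical version of that drop rule (field names and thresholds are assumptions for illustration, not taken from the patent):

```python
from dataclasses import dataclass

@dataclass
class AudioPacket:
    has_voice: bool       # voice-activity detector output for this packet
    similarity: float     # acoustic similarity to the previous packet, 0..1
    confidence: float     # confidence attached to the priority value, 0..1

def may_drop(pkt, sim_threshold=0.9, conf_threshold=0.8):
    """Transmit unless we are confident the packet carries little information:
    silence, or a near-duplicate that loss concealment can plausibly rebuild."""
    if pkt.confidence < conf_threshold:
        return False                       # low confidence -> always transmit
    return (not pkt.has_voice) or pkt.similarity >= sim_threshold

silence = AudioPacket(has_voice=False, similarity=0.3, confidence=0.95)
uncertain = AudioPacket(has_voice=True, similarity=0.95, confidence=0.5)
```

Gating on confidence matches the abstract's point that drop decisions improve when the confidence level of the priority assessment is taken into account.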

Dynamic weighted round robin scheduling policy method based on priorities

The invention provides a priority-based dynamic weighted round-robin scheduling method with high network-resource utilization, fair bandwidth allocation, and reasonable, efficient scheduling. For data streams of different priorities, the method comprises a queue management module and a round-robin scheduling module. The queue management module divides all traffic in the network into sub-queues of n priorities; when a backbone node receives a data packet, the packet's priority is determined from the QoS requirements of its service, and the packet is inserted into the corresponding cache sub-queue. The round-robin scheduling module polls periodically: the busy degree of each sub-queue is computed from its current queue length Q, the sub-queues are ranked by busy degree, the round-robin weights of the sub-queues are dynamically adjusted according to that ranking, and each sub-queue is then served in turn to send its packets.
Owner:10TH RES INST OF CETC
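The core loop described above can be sketched as: derive each sub-queue's busy degree from its length Q, rank the queues, rescale the weights from the ranking, then serve each queue up to its weight. The busy-degree measure and the linear weight scaling below are assumptions; the abstract does not give the exact formulas.

```python
from collections import deque

def dynamic_wrr_round(queues, base_weight=1):
    """queues: list of deques, index = priority class.
    Returns the packets served in one polling round."""
    # Busy degree: here simply the current queue length Q.
    busy = [len(q) for q in queues]
    # Rank queues by busy degree; busier queues get a larger weight
    # (assumed linear scaling, one of many possible policies).
    order = sorted(range(len(queues)), key=lambda i: busy[i])
    weight = {i: base_weight + rank for rank, i in enumerate(order)}
    served = []
    for i, q in enumerate(queues):
        for _ in range(weight[i]):       # serve each queue up to its weight
            if q:
                served.append(q.popleft())
    return served

q_hi = deque(["h1"])
q_lo = deque(["l1", "l2", "l3"])
round1 = dynamic_wrr_round([q_hi, q_lo])  # busier q_lo earns a higher weight
```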

Scheduling method for guaranteeing real-time transmission of wireless sensor network information

The invention relates to a scheduling method that guarantees real-time transmission of wireless sensor network information. The method comprises the following steps: 1) prioritize the data received by sensor nodes according to the wireless sensor network's application environment and monitored objects; 2) partition the buffer queues of wireless sensor nodes with routing functions according to those priorities; 3) configure the parameters of the L-RQS (LCFS-based Real-time Queue Scheduling) algorithm and determine their initial values; 4) build and initialize the wireless sensor network so that the sensors operate normally; 5) when a sensor node receives a data packet, the buffer-management algorithm in L-RQS performs the appropriate operation according to the current queue state and the packet's priority; 6) the queue-scheduling algorithm in L-RQS selects a packet to schedule and sets the state of the high-priority queue according to the number of consecutively transmitted high-priority packets or their waiting time; 7) after each packet is scheduled, the scheduler selects its state according to the number of packets remaining in the queue. The method is intended for wireless sensor network applications with strict real-time requirements.
Owner:NORTH CHINA INST OF SCI & TECH
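Step 5, enqueueing by priority under a buffer-management rule, might look like the sketch below. The overflow policy here (evict the worst-priority resident to admit a strictly better arrival) is an assumed choice for illustration; the abstract does not publish the actual L-RQS buffer-management rules or parameters.

```python
import heapq
import itertools

class PriorityBuffer:
    """Bounded buffer; smaller number = higher priority."""
    def __init__(self, capacity):
        self.capacity = capacity
        self._heap = []                      # entries: (priority, seq, packet)
        self._seq = itertools.count()        # tie-breaker for equal priorities

    def offer(self, priority, packet):
        """Accept the packet, evicting the worst-priority resident when full
        and strictly worse than the arrival; otherwise drop the arrival."""
        if len(self._heap) < self.capacity:
            heapq.heappush(self._heap, (priority, next(self._seq), packet))
            return True
        worst = max(self._heap)              # largest tuple = worst priority
        if priority < worst[0]:
            self._heap.remove(worst)
            heapq.heapify(self._heap)
            heapq.heappush(self._heap, (priority, next(self._seq), packet))
            return True
        return False                         # buffer full of better packets

    def pop(self):
        """Dequeue the highest-priority packet."""
        return heapq.heappop(self._heap)[2]

buf = PriorityBuffer(2)
buf.offer(2, "low-a")
buf.offer(2, "low-b")
accepted = buf.offer(1, "urgent")            # evicts one low-priority packet
```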

Data transmission method and device

The invention relates to a data transmission method and device that combine a resource-scheduling algorithm based on remaining time with a time-slot allocation algorithm based on time-slot reservation. By matching time-slot priority to data-packet priority as far as possible, high-priority data is transmitted first during time-slot stealing and time-slot collisions in congestion scenarios, and fairness of channel access among nodes is better ensured. The method comprises: when a data packet arrives at the transmission cache queue, maintaining its transmission remaining time according to the packet's transmission-delay information; and, when a transmission resource becomes available to the node, selecting the packet to transmit according to the remaining times and priorities of the packets in the cache queue and the priority of the available transmission resource, then transmitting it on that resource.
Owner:DATANG GOHIGH INTELLIGENT & CONNECTED TECH (CHONGQING) CO LTD
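The selection rule described above, choosing a packet from the cache queue by its remaining time, its priority, and the priority of the slot that just arrived, can be sketched with an assumed ordering: prefer packets whose priority matches the slot's, then the tightest remaining time, then the higher priority. The exact combination rule is not given in the abstract.

```python
def pick_packet(queue, slot_priority):
    """queue: list of dicts with 'remaining_ms' and 'priority'
    (smaller priority number = more urgent). Returns the packet to send
    on a slot of priority slot_priority, or None if the queue is empty."""
    if not queue:
        return None
    return min(queue, key=lambda p: (p["priority"] != slot_priority,  # match slot
                                     p["remaining_ms"],               # deadline
                                     p["priority"]))                  # urgency

queue = [
    {"id": "a", "remaining_ms": 50, "priority": 2},
    {"id": "b", "remaining_ms": 80, "priority": 1},
]
chosen = pick_packet(queue, slot_priority=1)   # slot priority matches "b"
```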

High real-time spacecraft data transmission method based on priorities

The invention discloses a priority-based, highly real-time spacecraft data transmission method. The method comprises: a first step of partitioning the data to be sent into data packets and assigning them consecutive sequence numbers in data-organization order; a second step of setting a priority for each packet and sending the packets in order from high priority to low; a third step in which the receiver, after receiving a packet, verifies its integrity and correctness and returns to the sender an acknowledgement packet containing the sequence number of the next packet to be received; a fourth step in which the sender sends the next packet upon receiving the acknowledgement, and retransmits the current packet if no acknowledgement arrives within the assigned time; and a fifth step in which, after all packets have been delivered, the receiver reassembles them into the complete data in sequence-number order. The method offers high real-time performance and reliability.
Owner:BEIJING INST OF CONTROL ENG
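Steps two through four amount to priority-ordered, acknowledged stop-and-wait delivery: send the highest-priority packet, wait for an acknowledgement, retransmit on timeout. A condensed sketch over an in-memory "link"; the lossy-channel model and retry limit are assumptions for illustration:

```python
def send_all(packets, deliver, max_retries=3):
    """packets: list of (seq, priority, data); deliver(pkt) returns the acked
    seq, or None on loss. Sends in priority order (smaller number = higher
    priority), retransmitting each packet until its ack arrives."""
    received = {}
    for seq, prio, data in sorted(packets, key=lambda p: (p[1], p[0])):
        for _ in range(max_retries):
            ack = deliver((seq, data))
            if ack == seq:                 # receiver confirmed this packet
                received[seq] = data
                break
        else:
            raise TimeoutError(f"no ack for packet {seq}")
    # Receiver reassembles in sequence-number order regardless of send order.
    return [received[s] for s in sorted(received)]

drops = iter([True, False])                # first attempt of first packet lost
def flaky_link(pkt):
    if next(drops, False):
        return None                        # simulated loss: no ack returned
    return pkt[0]

data = send_all([(0, 1, "cmd"), (1, 2, "telemetry")], flaky_link)
```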

Method and system for high-concurrency and reduced latency queue processing in networks

A method and a system for controlling a plurality of queues of an input port in a switching or routing system. The method supports the regular request-grant protocol along with speculative transmission requests in an integrated fashion. Each regular scheduling request or speculative transmission request is stored in request order using references to minimize memory usage and operation count. Data packet arrival and speculation event triggers can be processed concurrently to reduce operation count and latency. The method supports data packet priorities using a unified linked list for request storage. A descriptor cache is used to hide linked list processing latency and allow central scheduler response processing with reduced latency. The method further comprises processing a grant of a scheduling request, an acknowledgement of a speculation request or a negative acknowledgement of a speculation request. Grants and speculation responses can be processed concurrently to reduce operation count and latency. A queue controller allows request queues to be dequeued concurrently on central scheduler response arrival. Speculation requests are stored in a speculation request queue to maintain request queue consistency and allow scheduler response error recovery for the central scheduler.
Owner:IBM CORP

Long distance CSMA/CA protocol with QoS assurance

The invention discloses a long-distance CSMA/CA (Carrier Sense Multiple Access with Collision Avoidance) protocol with QoS (Quality of Service) assurance. A node on the communication channel executes the following steps: (1) after the previous data packet in the sending queue has been transmitted, enter a backoff stage; (2) according to the priorities of the packets in the current sending queue, move the highest-priority packet to the head of the queue; (3) compute the channel busy-to-idle ratio and the priority threshold corresponding to the head-of-queue packet; (4) after the backoff stage ends, measure the current channel busy-to-idle ratio and compare the head-of-queue packet's priority threshold against it; if the measured ratio is below the threshold, access the channel, send the packet, and return to step (1). By delaying channel access for low-priority packets, the protocol satisfies the performance requirements of high-priority packets while reducing the collision probability and improving throughput.
Owner:CHINESE AERONAUTICAL RADIO ELECTRONICS RES INST +1
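The access rule in step (4) compares the measured busy-to-idle ratio against a per-priority threshold: a high-priority packet gets a permissive threshold and may access even a fairly loaded channel, while low-priority packets defer. A sketch with invented threshold values; the patent derives its thresholds from channel conditions rather than fixing them:

```python
# Hypothetical thresholds: a packet may access the channel only while the
# measured busy/idle ratio stays below its priority's threshold.
THRESHOLDS = {0: 0.8, 1: 0.5, 2: 0.2}     # 0 = highest priority

def may_access(priority, busy_time, idle_time):
    """Step (4): compare the measured busy-to-idle ratio with the
    head-of-queue packet's priority threshold."""
    ratio = busy_time / idle_time if idle_time else float("inf")
    return ratio < THRESHOLDS[priority]

# Under moderate load (ratio 0.6) only the highest priority transmits;
# lower priorities defer, which lowers the collision probability.
decisions = {p: may_access(p, busy_time=6, idle_time=10) for p in THRESHOLDS}
```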

Method and system for realizing terminal quality of service of internet voice transmission system

The invention discloses a method and system for realizing terminal quality of service in an internet voice transmission (VoIP) system. The QoS chain of the system comprises a QoS control module and a QoS execution module. The method comprises the following steps: the user configures parameters, which a CGI module writes into a configuration file; the QoS control module renders the configuration file into a flow-control script, executes it, and writes the QoS policy into the system kernel; and when the kernel receives a data packet, the QoS execution module applies the policy to the packet. A first-in first-out policy with fixed priorities and allocated bandwidth is introduced to configure QoS, unifying end-to-end quality of service across the VoIP application system. This guarantees a sound QoS processing mechanism on the VoIP terminal, protects terminal voice quality, achieves smooth real-time IP calls, and meets basic user requirements at low cost, while the system remains easy to develop and maintain.
Owner:ZTE CORP