229 results about "Virtual queue" patented technology

Virtual queuing is a concept used in inbound call centers. Call centers use an Automatic Call Distributor (ACD) to distribute incoming calls to specific resources (agents) in the center. ACDs hold queued calls in First In, First Out order until agents become available. From the caller’s perspective, without virtual queuing they have only two choices: wait until an agent resource becomes available, or abandon (hang up) and try again later. From the call center’s perspective, a long queue results in many abandoned calls, repeat attempts, and customer dissatisfaction.
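The FIFO discipline and the caller's two options described above can be sketched as a toy queue where a "virtual" caller hangs up but keeps their place in line and is called back on reaching the head (a minimal illustration, not modeled on any specific ACD product):

```python
from collections import deque

class ACDQueue:
    """Toy ACD queue: strict First In, First Out order."""

    def __init__(self):
        self.calls = deque()

    def enqueue(self, caller_id, virtual=False):
        # A virtual caller hangs up but keeps their place in line.
        self.calls.append({"caller": caller_id, "virtual": virtual})

    def agent_available(self):
        # Next call in FIFO order; a virtual caller receives a
        # callback instead of being connected directly.
        if not self.calls:
            return None
        entry = self.calls.popleft()
        action = "callback" if entry["virtual"] else "connect"
        return (action, entry["caller"])

q = ACDQueue()
q.enqueue("alice")              # waits on the line
q.enqueue("bob", virtual=True)  # hangs up, keeps position
```

Alice, who queued first, is served first; Bob is called back rather than connected, so he never waited on hold.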

Queue Management System and Method

The invention provides a queue management system and method for controlling the movement of a group of one or more people through a virtual queue line for a service. The system comprises registration means (50) for registering the group, the registration means comprising an information carrier (52) bearing a registration code and at least one ID tag (54) including ID details for the member(s) of the group. The registration means associates the registration code with an indication of group size and uniquely with the ID details. The system further comprises interface means (48) for enabling communications to and from the group, and a processor (32, 34) associated with the interface means and responsive to a communication from the group including a communicator address and the registration code for generating a registration record for the group representing the group size, the ID details and the communicator address. The processor is arranged to receive a communication from the group requesting access to the virtual queue and to monitor the place of the group in the queue line and then trigger a summons signal when the group approaches or reaches the head of the queue line. The interface means is responsive to the summons signal for initiating a communication to the communicator address for summoning the group to the service. Access control apparatus (22) at the service reads the at least one ID tag and compares the ID details with the registration record in order to evaluate whether access to the service should be permitted or prevented.
Owner:ACCESSO TECH GRP
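The register/join/summon/admit flow in the abstract above can be sketched as follows; all class, method, and field names here are illustrative assumptions, not taken from the patent:

```python
class VirtualQueueLine:
    """Sketch of a virtual queue with registration records and
    ID-based access control (illustrative names only)."""

    def __init__(self, summon_window=1):
        self.records = {}   # registration code -> registration record
        self.line = []      # ordered registration codes
        self.summon_window = summon_window

    def register(self, code, group_size, id_details, address):
        # Associate the code with group size, ID details and the
        # group's communicator address.
        self.records[code] = {"size": group_size,
                              "ids": set(id_details),
                              "address": address}

    def join(self, code):
        # The group requests access to the virtual queue.
        self.line.append(code)

    def summons(self):
        # Addresses of groups at or near the head of the queue line.
        return [self.records[c]["address"]
                for c in self.line[:self.summon_window]]

    def admit(self, code, presented_ids):
        # Access control: presented ID tags must match the record.
        rec = self.records.get(code)
        return rec is not None and set(presented_ids) == rec["ids"]

line = VirtualQueueLine()
line.register("R1", 2, ["tag-a", "tag-b"], "+00-555-0100")
line.join("R1")
```

When "R1" reaches the head, `summons()` yields its communicator address, and `admit()` permits access only if the presented ID tags match the registration record.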

Method and system for weighted fair flow control in an asynchronous metro packet transport ring network

Status: Inactive | Publication: US7061861B1 | Effects: large amount of available bandwidth; excessive buffering delay | Classifications: Error prevention; Frequency-division multiplex details; Quality of service; Ring network
A method and system for implementing weighted fair flow control on a metropolitan area network. Weighted fair flow control is implemented using a plurality of metro packet switches (MPS), each including a respective plurality of virtual queues and a respective plurality of per flow queues. Each MPS accepts data from a respective plurality of local input flows. Each local input flow has a respective quality of service (QoS) associated therewith. The data of the local input flows are queued using the per flow queues, with each input flow having its respective per flow queue. Each virtual queue maintains a track of the flow rate of its respective local input flow. Data is transmitted from the local input flows of each MPS across a communications channel of the network and the bandwidth of the communications channel is allocated in accordance with the QoS of each local input flow. The QoS is used to determine the rate of transmission of the local input flow from the per flow queue to the communications channel. This implements an efficient weighted bandwidth utilization of the communications channel. Among the plurality of MPS, bandwidth of the communications channel is allocated by throttling the rate at which data is transmitted from an upstream MPS with respect to the rate at which data is transmitted from a downstream MPS, thereby implementing a weighted fair bandwidth utilization of the communications channel.
Owner:ARRIS ENTERPRISES LLC
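The core weighted allocation step, splitting channel bandwidth among flows in proportion to their QoS, can be sketched in a few lines (a simplification of the patent's scheme; the proportional-share rule and names are assumptions):

```python
def allocate_bandwidth(channel_capacity, flows):
    """Split channel capacity among flows in proportion to their
    QoS weight. `flows` maps flow id -> QoS weight."""
    total = sum(flows.values())
    return {flow: channel_capacity * weight / total
            for flow, weight in flows.items()}

# Flow "b" has three times the QoS weight of flow "a", so it is
# granted three times the bandwidth of the shared channel.
alloc = allocate_bandwidth(100.0, {"a": 1, "b": 3})
```

In the full design, each MPS additionally throttles upstream transmitters relative to downstream ones so that this weighting holds fairly across the whole ring.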

A shared resource scheduling method and system for distributed parallel processing

Status: Inactive | Publication: CN102298539A | Effects: solves the access contention problem; avoids deadlock | Classifications: Program initiation/switching; Parallel processing; Shared resource
The invention discloses a shared resource scheduling method and system for distributed parallel processing, based on a distributed operating mechanism: shared resource scheduling units are distributed across the processor subsystems, while resource locks and resource request arbitration units are distributed across the shared resources. These distributed units communicate by exchanging messages (resource access requests and grants) through a switching unit. The shared resource scheduling unit in each processor subsystem uses virtual queue technology to manage all resource access requests in its data cache; that is, a dedicated queue is opened for each accessible shared resource. A resource lock in each shared resource guarantees exclusive access to that resource at any time; the lock has two states, occupied and released. The request arbitration unit in each shared resource uses a priority-based fair polling algorithm to arbitrate resource access requests from different processing nodes. The invention effectively avoids contention when processing nodes access a shared resource, avoids both deadlock on shared resources and starvation of processing nodes, and provides efficient mutually exclusive access to shared resources.
Owner:EAST CHINA NORMAL UNIV
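A per-resource lock with its own request queue and a priority-aware fair arbiter, as described above, might look like this sketch (the two-state lock and the tie-breaking rule are modeled on the abstract; names and the exact arbitration policy are assumptions):

```python
class ResourceLock:
    """One lock per shared resource, with a virtual queue of
    pending requests and a priority-based fair arbiter."""

    def __init__(self):
        self.held_by = None   # None means the lock is released
        self.requests = []    # virtual queue of (priority, node)

    def request(self, node, priority=0):
        self.requests.append((priority, node))

    def grant_next(self):
        # Highest priority wins; FIFO order breaks ties, which keeps
        # the polling fair and prevents starvation among equals.
        if self.held_by is not None or not self.requests:
            return None
        best = max(range(len(self.requests)),
                   key=lambda i: self.requests[i][0])
        _, node = self.requests.pop(best)
        self.held_by = node
        return node

    def release(self):
        self.held_by = None

lock = ResourceLock()
lock.request("node-1", priority=0)
lock.request("node-2", priority=1)
```

While the lock is occupied, `grant_next()` returns nothing, which is how mutual exclusion is preserved; releasing it lets the next queued request through.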

Distributed Joint Admission Control And Dynamic Resource Allocation In Stream Processing Networks

Methods and apparatus operating in a stream processing network perform load shedding and dynamic resource allocation so as to meet a pre-determined utility criterion. Load shedding is envisioned as an admission control problem encompassing source nodes admitting workflows into the stream processing network. A primal-dual approach is used to decompose the admission control and resource allocation problems. The admission control operates as a push-and-pull process with sources pushing workflows into the stream processing network and sinks pulling processed workflows from the network. A virtual queue is maintained at each node to account for both queue backlogs and credits from sinks. Nodes of the stream processing network maintain shadow prices for each of the workflows and share congestion information with neighbor nodes. At each node, resources are devoted to the workflow with the maximum product of downstream pressure and processing rate, where the downstream pressure is defined as the backlog difference between neighbor nodes. The primal-dual controller iteratively adjusts the admission rates and resource allocation using local congestion feedback. The iterative controlling procedure further uses an interior-point method to improve the speed of convergence towards optimal admission and allocation decisions.
Owner:IBM CORP
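The per-node scheduling rule stated above, devote resources to the workflow with the maximum product of downstream pressure and processing rate, can be written directly (dictionary names are illustrative; backlogs stand in for the virtual queue lengths):

```python
def pick_workflow(backlog_here, backlog_downstream, rate):
    """Backpressure rule: choose the workflow maximizing
    (backlog difference to the neighbor) * (processing rate).
    All arguments map workflow id -> value."""
    def score(workflow):
        pressure = backlog_here[workflow] - backlog_downstream[workflow]
        return pressure * rate[workflow]
    return max(backlog_here, key=score)

# "y" wins: pressure 8 - 1 = 7 beats pressure 10 - 6 = 4.
chosen = pick_workflow({"x": 10, "y": 8},
                       {"x": 6, "y": 1},
                       {"x": 1.0, "y": 1.0})
```

In the full scheme this choice is one step of a primal-dual iteration: admission rates and allocations are then adjusted from local congestion feedback.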

Queue remote management system and method

A queue remote management system and a related method are provided, allowing a user to remotely book, receive and manage a ticket for a virtual queue, and finally to receive the call to approach the counter and the corresponding physical queue, minimizing the time of physical presence at the counter before receiving the expected service. Conventional ticket machines, operating at different counters and at different entity locations, are integrated with a wireless virtual ticket system operated by any user through a personal client terminal. The system comprises: at least one queue management server (100) in which a plurality of virtual queues are stored and managed, the virtual queues concerning a plurality of counters (102) of one or more entities, each counter (102) being connected to the queue management server (100) to exchange a predetermined information set concerning the calling of queue tickets; a plurality of ticket machines (101) physically associated with said counters (102) and issuing physical queue tickets, each ticket machine (101) being connected to the queue management server (100) to exchange a predetermined information set concerning the issuing of said physical queue tickets; and a plurality of client terminals (103) allowing a user to obtain a virtual queue ticket for one or more of said counters (102), each client terminal (103) comprising an identification code or an identification certificate allowing it to establish a connection to the queue management server (100) and exchange a predetermined information set concerning the status of the user's queue based on both the physical and the virtual tickets. The counters (102), the ticket machines (101) and the client terminals (103) are connected to the at least one queue management server (100) through a communication network, wherein said ticket machines (101), said counters (102) and the at least one queue management server (100) communicate through a corresponding VPN tunnel (104) or through an encrypted web service established in said communication network, the ticket machines (101) and the counters (102) being linked through a dedicated IP address or domain.
Owner:QURAMI
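A server that merges physical tickets (from the machines) and virtual tickets (from client terminals) into one numbered queue per counter could be sketched as below; the single shared ticket counter per queue is an assumption about the design, and all names are illustrative:

```python
class QueueServer:
    """Toy queue management server: one merged ticket sequence per
    counter, fed by both ticket machines and client terminals."""

    def __init__(self):
        self.queues = {}    # counter id -> list of (ticket_no, kind)
        self.next_no = {}   # counter id -> next ticket number

    def issue(self, counter, kind):
        # kind: "physical" (ticket machine) or "virtual" (client
        # terminal); both draw from the same number sequence so the
        # call order is consistent across the two ticket types.
        n = self.next_no.get(counter, 1)
        self.next_no[counter] = n + 1
        self.queues.setdefault(counter, []).append((n, kind))
        return n

    def call_next(self, counter):
        # The counter calls the lowest outstanding ticket.
        q = self.queues.get(counter, [])
        return q.pop(0) if q else None

server = QueueServer()
server.issue("counter-A", "physical")  # ticket 1, from the machine
server.issue("counter-A", "virtual")   # ticket 2, from a terminal
```

Calls come out in issue order regardless of ticket type, which is what lets a remote user keep their place against walk-in clients.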

Implementing method for optimizing network performance of virtual machine by using multiqueue technology

Status: Active | Publication: CN102591715A | Effects: improves network transmission performance; reliable optimization method | Classifications: Multiprogramming arrangements; Software simulation/interpretation/emulation; System call; Data traffic
The invention relates to an implementation method for optimizing the network performance of a virtual machine using multiqueue technology, comprising three steps: step 1, modify the network initialization part of QEMU to add multiqueue support; step 2, modify vhost to support multiqueue, so that QEMU can use a vhost-net multiqueue network card, including using one thread to carry out data transmission for each queue and modifying the relevant system calls; and step 3, modify the network-related part of the vhost module, i.e. the vhost-net multiqueue network card, so that the virtual network card supports multiqueue transmission. By designing and implementing a plurality of virtual queues from the virtual machine to the host, the method increases the network data traffic and throughput of the virtual machine. The design is ingenious, scientific and reasonable, with high practical value and wide application prospects in the field of computer technology.
Owner:Zhongke Yucheng (Beijing) Technology Service Co., Ltd.
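The one-thread-per-queue transmission model described in step 2 can be illustrated with a toy user-space analogue, hashing each flow onto one of N queues so that a flow's packets stay ordered (this is a simplification for illustration, not the actual QEMU/vhost kernel code):

```python
import queue
import threading

NUM_QUEUES = 4
queues = [queue.Queue() for _ in range(NUM_QUEUES)]
sent = []  # packets "transmitted" by the workers, in order

def worker(q):
    # One dedicated thread drains one queue, as in step 2 above.
    while True:
        pkt = q.get()
        if pkt is None:      # shutdown sentinel
            break
        sent.append(pkt)

def transmit(flow_id, payload):
    # Packets of one flow always map to the same queue, which
    # preserves per-flow ordering across the parallel workers.
    queues[hash(flow_id) % NUM_QUEUES].put((flow_id, payload))

threads = [threading.Thread(target=worker, args=(q,)) for q in queues]
for t in threads:
    t.start()
transmit("flow-a", "pkt1")
transmit("flow-a", "pkt2")
for q in queues:
    q.put(None)
for t in threads:
    t.join()
```

Because the two packets of `flow-a` land in the same queue and that queue has a single worker, they are transmitted in order even though four threads run in parallel.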

Client service reservation method

The invention relates to a reservation method, in particular to a client service reservation method for service units such as banks, hospitals and government offices, comprising the following steps: establishing a field (walk-in) queue and a reservation queue; inserting the reservation requests from the reservation queue into the field queue according to their reservation times, to form a virtual queue; adjusting the position of each reservation request in the virtual queue according to the processing speed of the virtual queue; moving a reservation request from the reservation queue into the field queue upon on-site confirmation by the client, to form the actual queue; and serving the actual queue in order. With this method, the client reserves a service time before going to the bank through channels such as mobile phone short message, the Internet and telephone voice; at the reserved time, the client prints a queuing number on the spot and transacts the service with that number. The method therefore avoids both the client waiting too long and the client's number being called before the client arrives.
Owner:Shenzhen Huaxin Intelligent Technology Co., Ltd.
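The insertion of reservations into the walk-in queue by reservation time can be sketched as follows; the fixed per-client service time used to convert a reserved minute into a queue position is an assumption for illustration, not a rule from the patent:

```python
def build_virtual_queue(field_queue, reservations, service_time=5):
    """Merge reservations into the walk-in (field) queue to form the
    virtual queue.

    field_queue:  list of walk-in client ids, in arrival order
    reservations: list of (reserved_minute, client id)
    service_time: assumed minutes per walk-in client, used to map a
                  reserved time onto a queue position
    """
    virtual = list(field_queue)
    for minute, client in sorted(reservations):
        # A reservation at minute m slots in ahead of the walk-in
        # clients expected to still be waiting at that time.
        pos = min(minute // service_time, len(virtual))
        virtual.insert(pos, client)
    return virtual

# Reservation r1 at minute 7 lands between w1 and w2.
vq = build_virtual_queue(["w1", "w2", "w3"], [(7, "r1")])
```

As the real queue speeds up or slows down, the patent's method re-runs this positioning step, so a reservation's place in the virtual queue tracks the actual processing speed.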