52 results for patented technology on how to "Reduce switching overhead"

Multi-channel multi-path routing protocol for vehicle team ad-hoc networks

The invention discloses a multi-channel multi-path routing protocol for vehicle team ad-hoc networks. The protocol mainly comprises: (1) each vehicle node works on a service channel; the vehicle nodes using the same service channel form a channel transmission path within the vehicle team, so that multiple transmission paths over multiple channels are formed in the team; (2) each vehicle node acquires the position, speed and motion direction of the other vehicle nodes through an adaptive distributed position service; and (3) a multi-channel greedy forwarding algorithm is adopted: when a vehicle node sends or forwards a data message, it selects the next-hop neighbor node with the greedy forwarding algorithm according to the position of the destination node and the working-channel utilization rate of each neighbor node, until the data message reaches the destination node. With this protocol, vehicle team communication is fully self-organized without relying on any infrastructure, adjacent vehicle nodes can communicate on different channel transmission paths at the same moment, and network throughput is improved; the protocol supports multi-hop transmission of large data volumes and has practical application prospects.
Owner:SOUTH CHINA UNIV OF TECH
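
A minimal sketch of the multi-channel greedy forwarding step described in this abstract, assuming a simple score that combines geographic progress toward the destination with the neighbor's working-channel utilization; the Neighbor fields, the scoring rule and the util_weight parameter are illustrative, not taken from the patent.

```python
import math

# Hypothetical neighbor record: position, working service channel, and that
# channel's measured utilization (0.0 - 1.0). Field names are illustrative.
class Neighbor:
    def __init__(self, node_id, x, y, channel, channel_utilization):
        self.node_id = node_id
        self.x, self.y = x, y
        self.channel = channel
        self.channel_utilization = channel_utilization

def distance(ax, ay, bx, by):
    return math.hypot(ax - bx, ay - by)

def greedy_next_hop(current, neighbors, dest, util_weight=0.5):
    """Pick the neighbor that makes the most progress toward the destination,
    penalising neighbors whose working channel is heavily loaded."""
    my_dist = distance(current.x, current.y, dest[0], dest[1])
    best, best_score = None, None
    for n in neighbors:
        d = distance(n.x, n.y, dest[0], dest[1])
        if d >= my_dist:           # greedy rule: only forward if it gets closer
            continue
        # Assumed scoring rule: remaining distance plus a channel-load penalty.
        score = d + util_weight * n.channel_utilization * my_dist
        if best_score is None or score < best_score:
            best, best_score = n, score
    return best  # None means no closer neighbor (local maximum)

# Usage: current node at (0, 0), destination at (500, 0).
me = Neighbor("me", 0, 0, channel=1, channel_utilization=0.2)
nbrs = [Neighbor("a", 120, 10, 2, 0.1), Neighbor("b", 200, -5, 3, 0.9)]
nxt = greedy_next_hop(me, nbrs, dest=(500, 0))
print(nxt.node_id if nxt else None)   # "a": closer-but-busy "b" is penalised
```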

Virtual CPU scheduling method

The invention relates to a virtual central processing unit (CPU) scheduling method, belonging to the technical field of computer virtualization, and solves the problem that existing virtual CPU scheduling methods use fixed-length time slices for all virtual CPUs, so that virtual machine performance suffers under resource limits. The method comprises the steps of initializing, updating virtual CPU credit values, joining the run queue, selecting and running a virtual CPU, processing the IO request bitmap, and running. The method sets scheduling time slices according to the virtual CPU running state: during scheduling, the time slice of a virtual CPU is set dynamically according to the IO request bitmap and the scheduling time slice table of the virtual machine to which the virtual CPU belongs. The IO request bitmap reflects the running characteristics of each virtual machine, so that CPU-intensive virtual machines incur small switching overhead and IO-intensive virtual machines obtain short response delay, making the method suitable for a variety of application environments and able to meet the requirements of different application service types.
Owner:HUAZHONG UNIV OF SCI & TECH
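
An illustrative sketch of the idea described above: credit-based vCPU selection combined with a per-VM time slice chosen from an IO request bitmap. The data layout, slice values and function names are assumptions for illustration, not the patent's implementation.

```python
# Long slices reduce context switches for CPU-bound guests; short slices keep
# response latency low for IO-bound guests. Values are illustrative.
CPU_BOUND_SLICE_MS = 30
IO_BOUND_SLICE_MS = 5

def pick_vcpu(run_queue):
    """Select the runnable vCPU with the highest remaining credit."""
    return max(run_queue, key=lambda vcpu: vcpu["credit"])

def time_slice_for(vcpu, io_request_bitmap):
    """Shorten the slice when the owning VM has pending IO requests."""
    vm_id = vcpu["vm_id"]
    has_pending_io = bool(io_request_bitmap & (1 << vm_id))
    return IO_BOUND_SLICE_MS if has_pending_io else CPU_BOUND_SLICE_MS

# Usage:
run_queue = [
    {"vm_id": 0, "vcpu_id": 0, "credit": 120},   # CPU-bound guest
    {"vm_id": 1, "vcpu_id": 0, "credit": 90},    # IO-bound guest
]
io_request_bitmap = 0b10          # VM 1 has an outstanding IO request
chosen = pick_vcpu(run_queue)
print(chosen["vm_id"], time_slice_for(chosen, io_request_bitmap))
```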

Switching method under heterogeneous cellular network

The invention discloses a switching method under a heterogeneous cellular network, which comprises the steps of: user equipment (UE) calculates a switching active threshold (HAT) from the effective switching threshold (EHT) and active index (AH) transmitted by a macro base station, and determines the coverage of the macro base station's switching active area (HAA) from the obtained HAT value; the UE determines its movement direction from the reference signal received power (RSRP) values of the macro base station and a micro base station; if it is determined to be moving from the macro base station toward the micro base station, the UE determines from the macro base station's RSRP value and the HAT value whether it has entered the macro base station's HAA, and thereby whether switching from the macro base station to the micro base station is required; if it is determined to be moving from the micro base station toward the macro base station, the UE determines from the macro base station's RSRP value whether it has entered the non-HAA area of the macro base station, and thereby whether switching from the micro base station to the macro base station is required. The method improves resource utilization and reduces the performance degradation and ping-pong effect caused by switching time delay.
Owner:BEIJING UNIV OF POSTS & TELECOMM +1
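
A hedged sketch of the decision logic described above. The abstract does not give the formula combining EHT and AH into HAT, so a simple additive form is assumed here, along with the assumption that the UE is inside the HAA when the macro RSRP falls below the HAT; all names and values are illustrative.

```python
def switching_active_threshold(eht_dbm, ah_db):
    return eht_dbm + ah_db            # assumed: HAT = EHT + AH

def moving_toward_micro(macro_rsrp_history, micro_rsrp_history):
    """Infer direction from RSRP trends: macro falling while micro rises."""
    return (macro_rsrp_history[-1] < macro_rsrp_history[0]
            and micro_rsrp_history[-1] > micro_rsrp_history[0])

def should_switch(macro_rsrp_history, micro_rsrp_history, eht_dbm, ah_db):
    hat = switching_active_threshold(eht_dbm, ah_db)
    in_haa = macro_rsrp_history[-1] < hat     # assumed HAA membership test
    if moving_toward_micro(macro_rsrp_history, micro_rsrp_history):
        return in_haa          # macro -> micro only once the UE enters the HAA
    return not in_haa          # micro -> macro only once the UE leaves the HAA

# Usage: RSRP samples in dBm, oldest first.
print(should_switch([-85, -92, -97], [-110, -102, -95], eht_dbm=-95, ah_db=3))
```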

Method and system for measuring carrier frequency information in wireless communication system

The invention discloses three schemes for a method for measuring carrier frequency information in a wireless communication system. In one scheme, a base station sends a quiet period indication signaling on a downlink channel; a terminal that successfully receives the quiet period indication signaling measures the carrier frequency information on a first time-frequency resource and feeds the measurement result back to the base station on a second time-frequency resource. The invention also discloses three schemes for a system for measuring carrier frequency information in the wireless communication system; in one scheme, a carrier frequency information measurement and feedback unit performs the corresponding operations: when the base station sends the quiet period indication signaling on the downlink channel, the terminal that successfully receives it measures the carrier frequency information on the first time-frequency resource and feeds the measurement result back to the base station on the second time-frequency resource. With the method and the system, the base station can learn the channel quality of every terminal on the current carrier frequency.
Owner:ZTE CORP
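
A minimal sketch of the signalling flow in this abstract: the base station issues a quiet-period indication, and each terminal that receives it measures on one time-frequency resource and reports on another. The message fields, class names and the stubbed measurement function are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class QuietPeriodIndication:
    measurement_resource: int   # first time-frequency resource (measure here)
    feedback_resource: int      # second time-frequency resource (report here)

class Terminal:
    def __init__(self, terminal_id, measure_fn):
        self.terminal_id = terminal_id
        self.measure_fn = measure_fn    # returns e.g. an RSRP/SINR value

    def handle_indication(self, indication):
        result = self.measure_fn(indication.measurement_resource)
        # The report goes back on the second resource named in the indication.
        return {"terminal": self.terminal_id,
                "resource": indication.feedback_resource,
                "measurement": result}

class BaseStation:
    def collect(self, terminals, indication):
        """Broadcast the indication and gather per-terminal channel reports."""
        return [t.handle_indication(indication) for t in terminals]

# Usage with a stubbed measurement:
bs = BaseStation()
ue = Terminal("UE-1", measure_fn=lambda res: -93.5)
print(bs.collect([ue], QuietPeriodIndication(measurement_resource=4,
                                             feedback_resource=7)))
```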

Online energy consumption management method and apparatus of large-scale server cluster

The embodiments of the invention disclose an online energy consumption management method for a large-scale server cluster. While ensuring that the CPU utilization rate of the powered-on servers equals a given target value, the servers in the cluster are dynamically managed according to load conditions so that the energy consumption of the cluster is minimized. Variables are defined per server model in the cluster and the energy consumption management problem is formulated as a planning problem, so that the on/off state, working frequency and load of each server are determined from the solution of the planning problem. This way of defining the planning variables greatly reduces the number of variables, so the planning problem can still be solved online even for a large-scale cluster. The method allows a server's frequency to be switched between two adjacent discrete frequencies to avoid wasted performance, while also keeping the number of servers that must switch between the two frequencies as small as possible, thereby reducing the switching cost. The invention further discloses an online energy consumption management apparatus for a large-scale server cluster.
Owner:SHANTOU UNIV
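
An illustrative sketch of the per-model planning idea in this abstract: keep every powered-on server at a target utilization by splitting the active servers between the two adjacent discrete frequencies that bracket the required per-server capacity, using as few servers at the higher frequency as possible. The linear capacity model and all parameter names are assumptions, not the patent's formulation.

```python
import math

def plan_servers(total_load, freqs_ghz, capacity_per_ghz, target_util):
    """Return (n_servers, f_low, f_high, n_at_high) for one server model."""
    freqs = sorted(freqs_ghz)
    usable = lambda f: f * capacity_per_ghz * target_util  # capacity at target utilization
    # Fewest servers that can carry the load at the top frequency.
    n = math.ceil(total_load / usable(freqs[-1]))
    per_server = total_load / n
    # Adjacent discrete frequencies bracketing the per-server requirement.
    f_high = next(f for f in freqs if usable(f) >= per_server)
    idx = freqs.index(f_high)
    f_low = freqs[max(idx - 1, 0)]
    if f_low == f_high:
        return n, f_low, f_high, n
    # Servers switched to the higher frequency, kept as few as possible.
    n_high = math.ceil((total_load - n * usable(f_low)) /
                       (usable(f_high) - usable(f_low)))
    return n, f_low, f_high, max(n_high, 0)

# Usage: 1000 load units, frequencies 1.2/1.8/2.4 GHz, 100 units/GHz, 80% target.
print(plan_servers(1000, [1.2, 1.8, 2.4], capacity_per_ghz=100, target_util=0.8))
```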

Thread processing device and method and computer system

Inactive · CN103514029A · Reduce the frequency of switching back and forth · Reduce switching overhead · Multiprogramming arrangements · Power consumption · Computerized system
The invention provides a thread processing device and method and a computer system. The thread processing device comprises a collection unit, a calculation unit, a partition unit, a judgment unit and an execution unit. The collection unit collects a plurality of threads to be executed within a scheduled time. The calculation unit calculates the memory access proportion of each of the threads during execution. The partition unit partitions the threads into n groups based on the memory access proportion of each thread. The judgment unit judges whether the threads in one of the n groups have been executed completely. The execution unit continues executing the threads in that group if they have not been executed completely, and executes the threads of the other groups once they have been executed completely. With the thread processing device and method and the computer system, back-and-forth switching of the operating frequency can be effectively reduced, so switching overhead is reduced, power consumption is lowered and efficiency is improved.
Owner:SONY CORP
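
A minimal sketch of the grouping-and-execution idea above: threads are bucketed by their memory access proportion and each bucket is run to completion before the next starts, so a frequency setting is changed once per group rather than once per thread. The bucket boundaries and the set_frequency_profile hook are illustrative assumptions.

```python
def partition_by_memory_ratio(threads, boundaries=(0.33, 0.66)):
    """Split threads into len(boundaries)+1 groups by memory-access ratio."""
    groups = [[] for _ in range(len(boundaries) + 1)]
    for t in threads:
        idx = sum(t["mem_ratio"] > b for b in boundaries)
        groups[idx].append(t)
    return groups

def run_groups(threads, set_frequency_profile, run_thread):
    for group_idx, group in enumerate(partition_by_memory_ratio(threads)):
        if not group:
            continue
        # One frequency switch per group instead of one per thread.
        set_frequency_profile(group_idx)
        for t in group:
            run_thread(t)

# Usage with stubbed hooks:
threads = [{"name": "compute", "mem_ratio": 0.1},
           {"name": "copy",    "mem_ratio": 0.8}]
run_groups(threads,
           set_frequency_profile=lambda g: print(f"profile for group {g}"),
           run_thread=lambda t: print(f"run {t['name']}"))
```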

Group pre-handover authentication method based on fixed path, high-speed rail network communication platform

The invention belongs to the technical field of communication network security, and discloses a group pre-handover authentication method based on a fixed path and a high-speed rail network communication platform, including: an initialization authentication stage; a fixed-path-based group pre-handover authentication stage; and a fixed-path-based group cooperative pre-handover authentication stage. The SDN server of the present invention knows the fixed track information of the train and the location information of the base stations in advance, and thus knows the next base station the MRN will connect to. The SDN server assists the pre-handover authentication and key negotiation between the MRN and the next base station in advance; therefore, when the MRN enters the range of the next base station, it can communicate with that base station directly. All the MRNs on the train form a group to perform handover authentication, which reduces the switching cost; considering the high-speed movement of the train, a cooperative handover process is added to ensure service continuity and further reduce the switching cost. The present invention can resist all currently known attacks.
Owner:XIDIAN UNIV
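
A hedged sketch of the pre-handover idea described above: because the SDN controller knows the train's fixed track, it can derive key material bound to the next base station before the train arrives, so the MRN group can attach without a full re-authentication. The key-derivation inputs, the HMAC construction and all names here are illustrative assumptions, not the patent's protocol.

```python
import hmac, hashlib, os

def derive_group_key(master_secret: bytes, group_id: str, next_bs_id: str,
                     nonce: bytes) -> bytes:
    """Derive a per-handover group key bound to the target base station."""
    info = f"{group_id}|{next_bs_id}".encode() + nonce
    return hmac.new(master_secret, info, hashlib.sha256).digest()

class SDNController:
    def __init__(self, master_secret: bytes, track_plan):
        self.master_secret = master_secret
        self.track_plan = track_plan        # ordered list of base station IDs

    def prepare_handover(self, group_id: str, current_bs: str):
        """Pre-compute the group key for the next base station on the track."""
        nxt = self.track_plan[self.track_plan.index(current_bs) + 1]
        nonce = os.urandom(16)
        key = derive_group_key(self.master_secret, group_id, nxt, nonce)
        # In the scheme, the key material would be distributed to both the
        # MRN group and the target base station ahead of the handover.
        return {"next_bs": nxt, "nonce": nonce, "group_key": key}

# Usage:
ctrl = SDNController(os.urandom(32), track_plan=["BS-7", "BS-8", "BS-9"])
print(ctrl.prepare_handover(group_id="train-42", current_bs="BS-7")["next_bs"])
```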