
Resource allocation method for reinforcement learning in ultra-dense network

A resource allocation technology for ultra-dense networks, applied in network topology, wireless communication, power management, etc. It addresses the problem that existing technology cannot adequately mitigate cross-tier interference in 5G networks.

Inactive Publication Date: 2017-01-25
BEIJING UNIV OF CHEM TECH

AI Technical Summary

Problems solved by technology

To make full use of spectrum resources, the two tiers of the network share the same frequency band, but this also introduces co-channel interference, and existing technology cannot adequately mitigate cross-tier interference in 5G networks. Accordingly, the present invention focuses on applying the self-optimization techniques of self-organizing networks to ultra-dense networks, so as to realize self-organized allocation of resources.




Embodiment Construction

[0037] The main idea of the present invention is as follows. The communication environment is simulated and a model is established; the learning factor, conjecture, transmission strategy, and evaluation function Q are initialized; and the state of the current channel is detected, where the state-indication parameters include the signal-to-interference ratio (SIR), the transmit power, and the channel state. The current action is selected according to the transmission strategy, and the detected SIR is compared with a given threshold: if it exceeds the threshold, a reward is obtained; otherwise, the reward is set to zero. A conjecture-based Q-update formula then yields a new Q value. From this Q value, the transmission strategy and conjecture for the next moment are obtained via a greedy strategy, the state of the next moment is updated, the next communication state is entered, and the learning process above is repeated. The power allocation scheme is evaluated with the Q value as the performance evaluation...
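The learning loop described in this paragraph can be sketched as follows. This is a hedged illustration only: the power levels, interference model, SIR threshold, and hyperparameters are hypothetical, and the patent's conjecture-based update is simplified here to the standard Q-learning rule, since the exact update formula is not given in this excerpt.

```python
import random

# Illustrative sketch of the Q-learning power-control loop described above.
# POWER_LEVELS, SIR_THRESHOLD, the interference model, and the state
# quantization are all assumptions, not taken from the patent text.
POWER_LEVELS = [5.0, 10.0, 15.0, 20.0]   # candidate transmit powers (dBm)
SIR_THRESHOLD = 3.0                       # target SIR (dB)
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1     # learning rate, discount, exploration

def measure_sir(power_dbm, interference_dbm):
    """Toy channel model: SIR in dB is transmit power minus interference."""
    return power_dbm - interference_dbm

def run_q_learning(episodes=2000, seed=0):
    rng = random.Random(seed)
    # One state per coarse interference level, one action per power level.
    states = range(3)  # 0 = low, 1 = medium, 2 = high interference
    q = {(s, a): 0.0 for s in states for a in range(len(POWER_LEVELS))}
    state = 0
    for _ in range(episodes):
        # Epsilon-greedy action selection (the "transmission strategy").
        if rng.random() < EPSILON:
            action = rng.randrange(len(POWER_LEVELS))
        else:
            action = max(range(len(POWER_LEVELS)), key=lambda a: q[(state, a)])
        interference = [5.0, 9.0, 13.0][state] + rng.uniform(-1.0, 1.0)
        sir = measure_sir(POWER_LEVELS[action], interference)
        # Reward: positive if the detected SIR exceeds the threshold, else zero.
        reward = 1.0 if sir > SIR_THRESHOLD else 0.0
        next_state = rng.randrange(3)  # environment moves to the next channel state
        best_next = max(q[(next_state, a)] for a in range(len(POWER_LEVELS)))
        # Standard Q-learning update (simplification of the conjecture-based rule).
        q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
        state = next_state
    return q

q = run_q_learning()
# The learned greedy policy: in the high-interference state it should
# favor the highest power level to keep the SIR above the threshold.
best = {s: max(range(len(POWER_LEVELS)), key=lambda a: q[(s, a)]) for s in range(3)}
print(best)
```

In this toy setup the agent learns to transmit at maximum power only when interference is high, which mirrors the patent's goal of adapting femtocell power to the channel state rather than always transmitting at maximum power.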



Abstract

A resource allocation method based on reinforcement learning in an ultra-dense network is provided. The invention relates to the field of ultra-dense networks in 5G (fifth-generation) mobile communications and provides a method for allocating resources between a home node B and a macro node B, between home node Bs, and between a home node B and a mobile user in a densely deployed network. The method is implemented through power control: each femtocell is treated as an intelligent agent, and the home node Bs jointly adjust their transmit powers, so that densely deployed home node Bs transmitting at maximum power are prevented from severely interfering with the macro node B and adjacent home node Bs, and system throughput is maximized. User delay QoS is considered, and the traditional "Shannon capacity" is replaced with an "available capacity" that can guarantee user delay. A supermodular game model is used so that the power allocation of the whole network reaches a Nash equilibrium, and the reinforcement learning method Q-learning is used so that each home node B has a learning capability and optimal power allocation can be achieved. With this resource allocation method, the system capacity of an ultra-dense network can be effectively improved while satisfying user delay constraints.
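The "available capacity" contrasted with Shannon capacity above is not defined in this excerpt; it plausibly corresponds to the standard effective-capacity notion from the delay-QoS literature, EC(θ) = -(1/θ)·ln E[exp(-θ·R)], where R is the per-block Shannon rate and θ is the delay-QoS exponent. The sketch below illustrates that assumed interpretation; the fading model and parameter values are hypothetical.

```python
import math
import random

# Hedged sketch: "available capacity" is assumed here to be the effective
# capacity EC(theta) = -(1/theta) * ln E[exp(-theta * R)], where R is the
# per-block Shannon rate and theta is the delay-QoS exponent. A larger
# theta (stricter delay requirement) yields a smaller usable capacity.

def shannon_rate(snr_linear, bandwidth_hz=1.0):
    """Per-block Shannon rate (nats/s per Hz of bandwidth)."""
    return bandwidth_hz * math.log(1.0 + snr_linear)

def effective_capacity(theta, snr_samples):
    """Monte Carlo estimate of the effective capacity over fading samples."""
    mgf = sum(math.exp(-theta * shannon_rate(s)) for s in snr_samples) / len(snr_samples)
    return -math.log(mgf) / theta

rng = random.Random(1)
# Rayleigh fading: SNR is exponentially distributed, mean 10 (hypothetical).
samples = [rng.expovariate(1.0 / 10.0) for _ in range(20000)]
shannon = sum(shannon_rate(s) for s in samples) / len(samples)   # ergodic (Shannon) capacity
loose = effective_capacity(theta=0.01, snr_samples=samples)      # loose delay constraint
strict = effective_capacity(theta=2.0, snr_samples=samples)      # strict delay constraint
print(shannon, loose, strict)
```

By Jensen's inequality the effective capacity never exceeds the ergodic Shannon capacity and shrinks as θ grows, which is why replacing Shannon capacity with a delay-aware capacity gives a more conservative, QoS-respecting throughput target.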

Description

technical field

[0001] This document relates to the field of mobile communications; in particular, the present invention is a resource allocation method for an ultra-dense heterogeneous network (Ultra Dense Network, UDN) in a fifth-generation (5G) mobile communication system.

Background technique

[0002] Mobile networks have entered a stage of rapid popularization. At the same time, countries around the world are actively researching 5G technology, and 5G standards have begun to emerge. A distinctive feature of 5G is the use of cognitive radio technology to automatically determine the frequency band provided by the network, thereby achieving multi-network integration. China's 5G work has also achieved initial results. The main goal of the 5G network is user experience, and the network needs to be redesigned and optimized in terms of capacity, speed, and delay. At the same time, the 5G network needs to accommodate a large number of terminals...

Claims


Application Information

IPC(8): H04W72/12, H04W52/24, H04W84/18
CPC: H04W52/241, H04W52/244, H04W84/18, H04W72/542
Inventor: 张海君王文韬孙梦颖郝匀琴周平阳欣豪
Owner BEIJING UNIV OF CHEM TECH