Resource allocation method and device based on reinforcement learning in ultra-dense network

A resource allocation method and device based on reinforcement learning, applied in ultra-dense networks, which addresses the problem that existing reinforcement learning approaches cannot cope with the dense connectivity of ultra-dense networks, improving network energy efficiency and achieving load balancing.

Active Publication Date: 2019-08-30
UNIV OF SCI & TECH BEIJING
  • Summary
  • Abstract
  • Description
  • Claims
  • Application Information

AI Technical Summary

Problems solved by technology

[0005] The technical problem to be solved by the present invention is to provide a resource allocation method and device based on reinforcement learning...

Method used


Image

  • Resource allocation method and device based on reinforcement learning in ultra-dense network

Examples


Embodiment 1

[0047] As shown in Figure 1, the resource allocation method based on reinforcement learning in an ultra-dense network provided by an embodiment of the present invention includes:

[0048] S101: based on Q-learning, analyze the current state of the network to obtain the user-to-base-station association strategy and the base-station transmit power control strategy that maximize the energy efficiency of the network;

[0049] S102: associate the user with the base station according to the association strategy obtained in S101;

[0050] S103: with the user associated with the base station, control the transmit power of the base stations in the network according to the transmit power control strategy obtained in S101.
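The patent does not disclose its exact state, action, or reward definitions, so the following is only an illustrative sketch of step S101: a single-state Q-learning loop over joint (association, power) actions, with an assumed energy-efficiency reward (Shannon-style rate divided by transmit power) and an assumed load penalty to encourage load balancing. All numeric parameters (gains, noise, power levels, penalty) are hypothetical.

```python
import math
import random

# Illustrative assumptions only; not the patent's actual model.
random.seed(0)

N_BS = 3                           # number of candidate base stations
POWER_LEVELS = [0.5, 1.0, 2.0]     # candidate transmit powers (W), assumed
GAIN = [1.0, 0.6, 0.3]             # assumed channel gains from user to each BS
NOISE = 0.1                        # assumed noise power
LOAD_PENALTY = 0.2                 # assumed penalty steering load balancing
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1  # Q-learning hyperparameters

# Joint action: (base station to associate with, transmit power to use).
ACTIONS = [(b, p) for b in range(N_BS) for p in POWER_LEVELS]
Q = {a: 0.0 for a in ACTIONS}

def reward(bs, power, load):
    # Energy-efficiency proxy: achievable rate divided by transmit power,
    # minus a load term so users spread across base stations.
    rate = math.log2(1.0 + GAIN[bs] * power / NOISE)
    return rate / power - LOAD_PENALTY * load[bs]

load = [0] * N_BS
for episode in range(2000):
    # Epsilon-greedy selection over the joint association/power action.
    a = random.choice(ACTIONS) if random.random() < EPS else max(Q, key=Q.get)
    bs, p = a
    r = reward(bs, p, load)
    # Q-learning update (single aggregate state for brevity):
    # Q(a) <- Q(a) + alpha * (r + gamma * max_a' Q(a') - Q(a))
    Q[a] += ALPHA * (r + GAMMA * max(Q.values()) - Q[a])
    load[bs] = min(load[bs] + 1, 10)   # the chosen BS accumulates load
    if episode % 20 == 0:              # load slowly drains over time
        load = [max(l - 1, 0) for l in load]

best_bs, best_power = max(Q, key=Q.get)
print("S101 -> associate with BS", best_bs, "at power", best_power)
```

The learned greedy action then drives S102 (association) and S103 (power control). In a full implementation the state would encode per-cell load and channel conditions rather than being collapsed to a single state.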

[0051] The resource allocation method based on reinforcement...

Embodiment 2

[0079] The present invention also provides a specific embodiment of a resource allocation device based on reinforcement learning in an ultra-dense network. Since this device corresponds to the specific embodiment of the method described above, and achieves the purpose of the invention by executing the process steps of that method embodiment, the explanations given for the method embodiment also apply to the specific embodiment of the resource allocation device based on reinforcement learning in the ultra...
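Since the device embodiment mirrors method steps S101-S103, it could be sketched as a class with one module per step. The class, method, and field names below are assumptions for illustration; the patent's claims define the actual modules.

```python
# Hypothetical device sketch; module names are assumptions.
class ResourceAllocationDevice:
    """Device whose modules mirror method steps S101-S103."""

    def __init__(self, q_table):
        # Learned Q-values over (base station, power) actions from S101.
        self.q_table = q_table

    def analyze(self):
        # S101: select the (association, power) action with the highest Q-value.
        return max(self.q_table, key=self.q_table.get)

    def associate(self, user, bs):
        # S102: associate the user with the selected base station.
        return {"user": user, "bs": bs}

    def control_power(self, bs, power):
        # S103: apply the selected transmit power at that base station.
        return {"bs": bs, "power": power}

# Usage with a toy Q-table: action (1, 0.5) has the highest value.
device = ResourceAllocationDevice({(0, 1.0): 2.5, (1, 0.5): 3.1})
bs, power = device.analyze()
link = device.associate(user=7, bs=bs)
setting = device.control_power(bs, power)
print(link, setting)
```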



Abstract

The invention provides a resource allocation method and device based on reinforcement learning in an ultra-dense network, which can realize load balancing in the network and improve its energy efficiency. The method comprises the following steps: analyzing the current state of the network based on Q-learning to obtain the user-to-base-station association strategy and the base-station transmit power control strategy that maximize the energy efficiency of the network; associating the user with the base station according to the obtained association strategy; and, with the user associated with the base station, controlling the transmit power of the base stations in the network according to the obtained transmit power control strategy. The invention relates to the technical field of communication.

Description

Technical Field

[0001] The invention relates to the field of communication technology, in particular to a resource allocation method and device based on reinforcement learning in an ultra-dense network.

Background

[0002] With the rapid development of mobile terminals, demand for network capacity has increased dramatically. Deploying a large number of small base stations, such as femtocell base stations (BSs), microcell BSs and picocell BSs, can enhance network capacity. The ultra-dense network is a new network architecture in fifth-generation mobile communication that shortens the distance between users and low-power base stations and improves system capacity and spectrum efficiency. As the network architecture changes from the traditional architecture to an ultra-dense network, it also faces many new challenges, such as network design, resource allocation and user association.

[0003] In an ultra-dense network, users and low-power base stations are very den...

Claims


Application Information

IPC(8): H04W28/08; H04W52/26; H04W52/40
CPC: H04W28/08; H04W52/265; H04W52/40
Inventor: 张海君, 李东, 任冶冰, 刘玮, 董江波, 姜春晓, 皇甫伟, 隆克平
Owner UNIV OF SCI & TECH BEIJING