Collaborative edge caching algorithm based on deep reinforcement learning in ultra-dense network

An ultra-dense-network and reinforcement-learning technology, applied in the field of collaborative edge caching algorithms, which addresses problems such as inefficient use of experience samples and overestimation of the Q value, and accelerates learning speed.

Pending Publication Date: 2020-11-20
HOHAI UNIV CHANGZHOU
Cites: 0 | Cited by: 17

AI Technical Summary

Problems solved by technology

However, the traditional DQN algorithm usually overestimates the Q value, so a Double DQN algorithm, based on the Double Q-learning algorithm, is used; it effectively solves the overestimation problem of the DQN algorithm.
In addition, the traditional DQN algorithm usually uses random uniform sampling to extract experience samples from the experience replay memory to update the Q network, i.e. every experience sample has the same probability of being selected, so the few but particularly high-value experience samples are not efficiently utilized. A Prioritized Experience Replay technique is therefore used to solve this sampling problem and accelerate learning.
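
The page names these two standard techniques (Double DQN target computation and prioritized experience replay) without showing the patent's network or training details, so the following PyTorch snippet is only a minimal, generic sketch of the two ideas; the function names, tensor shapes, and hyperparameters (gamma, alpha, beta) are assumptions for illustration, not the disclosed implementation.

```python
import torch

def double_dqn_targets(online_net, target_net, rewards, next_states, dones, gamma=0.99):
    """Double DQN target: the online network selects the next action and the
    target network evaluates it, which curbs Q-value overestimation."""
    with torch.no_grad():
        next_actions = online_net(next_states).argmax(dim=1, keepdim=True)   # action selection
        next_q = target_net(next_states).gather(1, next_actions).squeeze(1)  # action evaluation
        return rewards + gamma * (1.0 - dones) * next_q

def prioritized_sample(td_errors, batch_size, alpha=0.6, beta=0.4, eps=1e-6):
    """Prioritized experience replay: transitions are drawn with probability
    proportional to |TD error|^alpha, so rare, high-value samples are replayed
    more often; importance-sampling weights correct the induced bias."""
    priorities = (td_errors.abs() + eps) ** alpha
    probs = priorities / priorities.sum()
    idx = torch.multinomial(probs, batch_size, replacement=True)
    weights = (len(td_errors) * probs[idx]) ** (-beta)
    return idx, weights / weights.max()
```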




Embodiment Construction

[0088] In order to enable those skilled in the art to better understand the technical solutions in this application, the technical solutions in the embodiments of this application are described clearly and completely below. Obviously, the described embodiments are only some of the embodiments of this application, not all of them. Based on the embodiments in this application, all other embodiments obtained by persons of ordinary skill in the art without creative effort shall fall within the scope of protection of this application.

[0089] A collaborative edge caching algorithm based on deep reinforcement learning in ultra-dense networks comprises the following specific steps:

[0090] Step 1: Set the parameters of the system model;

[0091] Step 2: Use the Double DQN algorithm to make an optimal cache decision for each SBS so as to maximize the total content cache hit rate of all SBSs, including the hit rate for requests served by the local SBS and the hit rate for requests served by other ...
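
The visible text stops at step 2 and does not give the state, action, or reward encodings, so the loop below is only a hypothetical sketch of how one Double DQN agent per SBS might be driven during training; the `env` and `agent` interfaces (`reset`, `step`, `store`, `learn`, `best_action`, `sample_action`) and the epsilon-greedy schedule are invented for illustration.

```python
import random

def run_caching_episode(agents, env, epsilon=0.1):
    """One training episode of cooperative cache placement: each SBS agent observes
    its local request statistics plus neighbouring cache states, chooses which
    contents to cache, and is rewarded by local plus neighbour cache hits."""
    states = env.reset()
    done = False
    while not done:
        actions = []
        for sbs_id, agent in enumerate(agents):
            if random.random() < epsilon:                  # epsilon-greedy exploration
                actions.append(env.sample_action(sbs_id))
            else:
                actions.append(agent.best_action(states[sbs_id]))
        next_states, rewards, done = env.step(actions)     # rewards: per-SBS hit counts
        for sbs_id, agent in enumerate(agents):
            agent.store(states[sbs_id], actions[sbs_id],
                        rewards[sbs_id], next_states[sbs_id], done)
            agent.learn()                                   # Double DQN update with prioritized replay
        states = next_states
```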



Abstract

The invention discloses a collaborative edge caching algorithm based on deep reinforcement learning in an ultra-dense network. The algorithm comprises the following specific steps: 1, setting each parameter of a system model; and 2, making an optimal cache decision for each SBS by adopting a Double DQN algorithm so as to maximize the total content cache hit rate of all the SBSs. The algorithm combines the DQN algorithm with the Double Q-learning algorithm, which effectively solves the DQN algorithm's over-estimation of the Q value. In addition, the algorithm adopts a prioritized experience replay technique, which accelerates learning. The method further comprises a step 3, making an optimal bandwidth resource allocation decision for each SBS by adopting an improved branch and bound method so as to minimize the total content downloading delay of all user equipment. The method can effectively reduce the content downloading delay of all users in the ultra-dense network, improves the content cache hit rate and the spectrum resource utilization rate, has good robustness and scalability, and is suitable for large-scale, user-dense ultra-dense networks.
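
Step 3 relies on an "improved branch and bound method" whose details are not shown on this page, so the snippet below is only a textbook, depth-first branch and bound over integer bandwidth-unit allocations with an assumed delay model (delay_i = load_i / units_i); it illustrates the bounding-and-pruning idea, not the patent's improved variant.

```python
import math

def min_delay_allocation(loads, total_units):
    """Allocate integer bandwidth units to SBSs to minimise sum(load_i / units_i).
    Plain depth-first branch and bound with an optimistic lower bound for pruning."""
    n = len(loads)
    best = {"delay": math.inf, "alloc": None}

    def branch(i, remaining, alloc, partial_delay):
        if i == n:
            if partial_delay < best["delay"]:
                best["delay"], best["alloc"] = partial_delay, alloc[:]
            return
        if remaining < n - i:                        # not enough units to give each SBS one
            return
        # Optimistic bound: pretend every undecided SBS gets the whole remaining budget.
        bound = partial_delay + sum(loads[j] / remaining for j in range(i, n))
        if bound >= best["delay"]:
            return                                   # prune this subtree
        # Leave at least one unit for every SBS still to be decided.
        for units in range(1, remaining - (n - 1 - i) + 1):
            alloc.append(units)
            branch(i + 1, remaining - units, alloc, partial_delay + loads[i] / units)
            alloc.pop()

    branch(0, total_units, [], 0.0)
    return best["alloc"], best["delay"]

# Example: three SBSs with different traffic loads sharing 10 bandwidth units.
print(min_delay_allocation([4.0, 1.0, 2.0], 10))
```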

Description

Technical field
[0001] The invention relates to a cooperative edge caching algorithm based on deep reinforcement learning in an ultra-dense network, and belongs to the field of edge caching in ultra-dense networks.
Background technique
[0002] In the 5G era, with the popularity of smart mobile devices and mobile applications, mobile data traffic is experiencing explosive growth. In order to meet the 5G requirements of high capacity, high throughput, high user-experienced rate, high reliability, and wide coverage, ultra-dense networks (Ultra-Dense Networks, UDN) came into being. A UDN densely deploys low-power small base stations (Small Base Stations, SBS) in indoor and outdoor hotspot areas (such as office buildings, shopping malls, subways, airports, and tunnels) within the coverage of a macro base station (Macro Base Station, MBS) to improve network capacity and spatial multiplexing, while covering blind areas that the MBS cannot reach. [000...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC (8): H04W28/14; H04L29/08
CPC: H04L67/1097; H04W28/14
Inventor: 韩光洁 (Han Guangjie), 张帆 (Zhang Fan)
Owner: HOHAI UNIV CHANGZHOU