
An edge caching method, device and electronic device based on reinforcement learning

A technology combining reinforcement learning and caching, applied in machine learning, electrical components, instruments, etc. It addresses problems such as cached content being heavily affected by the preset time period and wasted cache space, and achieves effects such as small coverage, reduced delay, an improved cache hit rate, and a higher cache space utilization rate.

Active Publication Date: 2021-06-04
BEIJING UNIV OF POSTS & TELECOMM

AI Technical Summary

Problems solved by technology

[0004] However, the existing approach caches content based on the request probability of content, i.e. the number of requests received by the small cell base station within a preset time period, so the content cached in the small cell base station is strongly affected by how that preset time period is set.
Moreover, in practical applications the content that users need changes over time, so content with a high request probability within the preset time period is not necessarily the content users will actually request. As a result, the cached content may not achieve a high hit rate, which wastes cache space to a certain extent.
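For concreteness, the following is a minimal sketch of the count-based scheme described above, assuming a hypothetical count_based_cache helper that simply keeps the top-K most requested items from one preset window; which window is chosen then fully determines what gets cached.

    from collections import Counter

    def count_based_cache(requests_in_window, cache_capacity):
        """Hypothetical illustration of the prior-art scheme: cache the items
        requested most often during a single preset time window."""
        # requests_in_window: content IDs requested at this small cell base station
        # during the preset period; cache_capacity: items the base station can hold.
        counts = Counter(requests_in_window)
        # Keep only the top-K items by request count; the result depends entirely
        # on which window was chosen, which is the sensitivity noted above.
        return [content_id for content_id, _ in counts.most_common(cache_capacity)]

    # A short window dominated by a transient burst of requests for item "c":
    print(count_based_cache(["a", "c", "c", "c", "b", "a"], cache_capacity=2))
    # ['c', 'a'] -- possibly already stale once user interests shift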

Examples


Embodiments

[0107] As an optional implementation of this embodiment of the present invention, as shown in Figure 3, processing the content popularity feature matrix and the content caching status of the current small cell with the pre-trained caching policy model based on the reinforcement learning algorithm, to obtain the caching strategy for each content item in the current small cell, may include:

[0108] S1031. Determine the action vector corresponding to each content item according to the content caching status of the current small cell.

[0109] In this embodiment of the present invention, the pre-trained caching policy model based on the reinforcement learning algorithm is obtained by training on the sample content popularity feature matrix, the content caching status corresponding to the sample content, and the action vectors corresponding to the sample content within a period of time. After the caching policy model has been trained on the request and caching status...
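As a rough illustration of step S1031 and the inference that follows, the sketch below derives a binary action vector per content item from the current caching status and feeds it, together with the popularity feature matrix, to a stand-in policy model. The action encoding, the predict interface, and the 0.5 threshold are assumptions made for illustration, not definitions from this description.

    import numpy as np

    class DummyPolicy:
        """Stand-in for the pre-trained RL caching policy (assumed interface)."""
        def predict(self, state):
            # Score each content item; a trained model would replace this rule.
            return state.mean(axis=1)

    def action_vectors(cache_status):
        """S1031 (illustrative): one action vector per content item, derived from
        the current caching status; assumed binary encoding [not cached, cached]."""
        vecs = np.zeros((len(cache_status), 2), dtype=np.float32)
        for i, cached in enumerate(cache_status):
            vecs[i, int(bool(cached))] = 1.0
        return vecs

    def caching_strategy(policy, popularity_features, cache_status):
        """Concatenate the popularity feature matrix with the action vectors and
        let the policy model score each item; 1 = cache the item, 0 = do not."""
        state = np.concatenate([popularity_features, action_vectors(cache_status)], axis=1)
        return (policy.predict(state) > 0.5).astype(int)

    # Example: 4 content items, 2 popularity features each, items 1 and 3 cached
    features = np.array([[0.9, 0.8], [0.1, 0.3], [0.2, 0.2], [0.1, 0.1]])
    print(caching_strategy(DummyPolicy(), features, cache_status=[0, 1, 0, 1]))
    # [1 0 0 0] with this dummy scorer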



Abstract

An embodiment of the present invention provides an edge caching method, device, and electronic device based on reinforcement learning. The method includes: obtaining the number of requests for each content item in multiple small cells within a preset time period, together with the content caching status of the current small cell; calculating, from these request counts, the first content popularity and the second content popularity corresponding to each content item to obtain a content popularity feature matrix; processing the content popularity feature matrix and the content caching status of the current small cell with a pre-trained caching policy model built on a reinforcement learning algorithm to obtain the caching strategy for each content item in the current small cell; and caching content in the current small cell according to that strategy. Applying this embodiment of the present invention can improve the cache hit rate and the cache space utilization of the content cached by the small cell.
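The abstract does not spell out how the first and second content popularity are defined; as an assumption for illustration, the sketch below takes the first as each item's request share within the current small cell and the second as its share across all small cells, and stacks them into the feature matrix that the policy model consumes.

    import numpy as np

    def popularity_feature_matrix(request_counts, current_cell):
        """Build a content popularity feature matrix from per-cell request counts
        (rows = small cells, columns = content items) for the preset time period.
        The two popularity definitions used here are illustrative assumptions."""
        counts = np.asarray(request_counts, dtype=np.float64)
        local = counts[current_cell]
        first = local / max(local.sum(), 1.0)       # share within the current small cell
        total = counts.sum(axis=0)
        second = total / max(total.sum(), 1.0)      # share across all small cells
        return np.stack([first, second], axis=1)    # one row per content item

    # Example: 3 small cells, 4 content items; the current small cell is cell 0
    counts = [[5, 0, 2, 1],
              [1, 3, 0, 0],
              [0, 4, 1, 2]]
    print(popularity_feature_matrix(counts, current_cell=0).shape)  # (4, 2)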

Description

Technical Field

[0001] The present invention relates to the technical field of wireless communication, and in particular to an edge caching method, device, and electronic device based on reinforcement learning.

Background

[0002] With the rapid development of wireless communication technology, the demand for mobile data traffic has increased sharply, and dense small cell networks have emerged in response. A dense small cell network involves content distribution and sharing, for example of video files, news, and alerts. When a user needs content, the user sends a content request to the small cell base station, the base station forwards the request to the core network, the core network returns the service data corresponding to the request to the base station, and the base station then delivers that service data to the user. As the demand for services such as mobile video continues to increase, a ...
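As a minimal sketch of the delivery flow in [0002], the function below serves a request from a hypothetical edge cache when the content is already stored at the base station and otherwise forwards it to the core network; all names and interfaces here are illustrative, not taken from the description.

    def serve_request(content_id, edge_cache, fetch_from_core):
        """Illustrative request flow: an edge cache hit avoids the round trip
        through the core network described in the background above."""
        if content_id in edge_cache:
            return edge_cache[content_id]      # served directly by the base station
        data = fetch_from_core(content_id)     # request forwarded to the core network
        edge_cache[content_id] = data          # keep a copy for later requests
        return data

    # Example: a dict as the edge cache and a trivial core-network stub
    cache = {"news-1": b"cached bytes"}
    print(serve_request("news-1", cache, lambda cid: b"from core"))   # hit
    print(serve_request("video-7", cache, lambda cid: b"from core"))  # miss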


Application Information

Patent Type & Authority: Patent (China)
IPC(8): H04W28/14; H04L29/08; G06N20/00
CPC: H04W28/14; G06N20/00; H04L67/5682; H04L67/568
Inventors: 范绍帅, 胡力芸, 田辉
Owner: BEIJING UNIV OF POSTS & TELECOMM