
Content caching method based on deep learning

A content caching technology based on deep learning, applied in the field of deep-learning-based content caching, which addresses the problems of inaccurate popularity prediction, low cache space utilization, and low cache hit rate, achieving a good hit rate and good prediction accuracy.

Pending Publication Date: 2021-05-28
NANJING UNIV

AI Technical Summary

Problems solved by technology

However, in the above methods, the popularity prediction model is not accurate enough, and the caching decision depends too heavily on the predicted popularity ranking. The model parameters can only be updated periodically on a coarse time scale and cannot cope with or adapt to sudden events (such as the emergence of new hot content). As a result, problems such as a low cache hit rate, increased network delay, and low utilization of cache space easily arise.



Embodiment Construction

[0032] The present invention will be described in detail below in conjunction with the accompanying drawings and specific embodiments.

[0033] Referring to Figure 1, the specific steps of the content caching method based on deep learning in this embodiment are as follows:

[0034] Step 1. Collect the user request information of the edge nodes in an area, mainly including the ID of the requested content, the ID of the requesting user, and the timestamp of the request, and sort the records by timestamp to construct a time series.
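Step 1 can be sketched as follows; the `Request` record and its field names are illustrative assumptions, not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class Request:
    content_id: int   # ID of the requested content
    user_id: int      # ID of the requesting user
    timestamp: float  # time at which the request was issued

def build_request_sequence(requests):
    """Sort collected edge-node requests by timestamp to form a time series."""
    return sorted(requests, key=lambda r: r.timestamp)

# example: three out-of-order request records collected at an edge node
log = [Request(5, 1, 3.0), Request(2, 7, 1.0), Request(5, 3, 2.0)]
series = build_request_sequence(log)
print([r.content_id for r in series])  # [2, 5, 5] — content IDs in time order
```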

[0035] Step 2. According to the content request sequence sorted by timestamp collected in Step 1, a probability sliding window is constructed based on a fixed request length N. The time step of the sliding-window movement is m (m &lt; N). Let O_i(t) denote the number of occurrences of content C_i (i = 1, 2, 3, ..., n) in the probability window at the current moment t; then the content popularity of C_i in the probability window at the current moment t is defined as: ...
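A minimal sketch of the probability sliding window in Step 2. The popularity formula is truncated in the source; the window frequency O_i(t)/N used below is an assumption suggested by the occurrence-count definition, and all names and parameter values are illustrative:

```python
from collections import Counter

def window_popularity(request_ids, N, m):
    """Slide a window of fixed length N over the request sequence with step m.
    For each window position t, return a dict mapping each content ID C_i to
    its assumed popularity O_i(t) / N (occurrences in the window over N)."""
    popularities = []
    for start in range(0, len(request_ids) - N + 1, m):
        window = request_ids[start:start + N]
        counts = Counter(window)
        popularities.append({c: counts[c] / N for c in counts})
    return popularities

# example: window length N = 4 requests, step m = 2
pops = window_popularity([1, 1, 2, 3, 1, 2, 2, 3], N=4, m=2)
print(pops[0])  # {1: 0.5, 2: 0.25, 3: 0.25}
```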



Abstract

The invention discloses a content caching method based on deep learning. The method comprises the following steps: (1) collecting user request information at edge nodes to construct a time-ordered request sequence; (2) calculating content popularity from the request sequence data; (3) carrying out max-min normalization; (4) converting the time-series prediction problem into a supervised learning problem; (5) training a popularity prediction model offline based on a temporal convolutional network; (6) calling the popularity prediction model to predict popularity, performing a weighted summation of the predicted popularity and the historical popularity based on an exponential average, and calculating a content value; and (7) making the caching decision with the LRU algorithm. The method can predict the popularity distribution of content from the characteristics of the content request sequence alone; at the same time, by combining historical content popularity information, it strikes a balance between long-term trends and short-term bursts, achieving good results in both prediction accuracy and cache hit rate.
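Steps (6) and (7) of the abstract can be sketched as follows; the mixing weight `alpha` and the minimal `OrderedDict`-based LRU are illustrative assumptions rather than the patent's exact formulation:

```python
from collections import OrderedDict

def content_value(predicted, history, alpha=0.6):
    """Content value as a weighted sum of the predicted popularity and an
    exponential average of historical popularity (alpha is an assumed weight)."""
    return {c: alpha * predicted.get(c, 0.0) + (1 - alpha) * history.get(c, 0.0)
            for c in set(predicted) | set(history)}

class LRUCache:
    """Minimal LRU cache used for the final caching decision (step 7)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()

    def access(self, content_id):
        hit = content_id in self.store
        if hit:
            self.store.move_to_end(content_id)   # refresh recency on a hit
        else:
            if len(self.store) >= self.capacity:
                self.store.popitem(last=False)   # evict least recently used
            self.store[content_id] = True        # cache the new content
        return hit

# example: capacity 2, request stream 1, 2, 1, 3
cache = LRUCache(capacity=2)
for c in [1, 2, 1, 3]:
    cache.access(c)
print(list(cache.store))  # [1, 3] — content 2 was evicted
```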

Description

Technical Field

[0001] The invention relates to the technical field of edge caching in mobile communications, and in particular to a content caching method based on deep learning.

Background Technique

[0002] With the continuing rapid spread of smart devices and advanced mobile application services, wireless networks have come under unprecedented data traffic pressure in recent years. Ever-increasing mobile data traffic puts enormous pressure on backhaul links with limited capacity, especially during traffic peaks. By placing the most popular content at the end user or at base stations closer to the requesting users, edge content caching can effectively save the time and resources required to request and transmit content from upper-level or origin content servers, and can reduce data traffic. However, due to the limited cache capacity of nodes and the variation of content popularity over time and space, content caching technology...

Claims


Application Information

Patent Type &amp; Authority: Application (China)
IPC(8): G06N3/04; G06N3/08; G06F9/50
CPC: G06N3/049; G06N3/084; G06F9/5072; G06N3/045
Inventors: 张旭, 漆政南, 马展
Owner: NANJING UNIV