
Vehicle-mounted edge task centralized scheduling and resource allocation joint optimization method based on deep reinforcement learning

A technology combining reinforcement learning and resource allocation, applied in the field of in-vehicle mobile edge computing, which can solve problems such as edge server load imbalance

Active Publication Date: 2021-09-21
JIANGSU UNIV


Problems solved by technology

[0005] In view of the above problems, the present invention proposes a software-defined, deep-learning-based decision-making method for in-vehicle task edge scheduling and resource allocation, to solve the edge server load imbalance caused by computing in-vehicle tasks. The method includes the following steps:




Embodiment approach

[0082] As shown in figure 1, assume that vehicle j sends task Qj to the RSU at this time. The specific implementation of the present invention is then as follows:

[0083] (1) Use the SDN controller to collect the relevant information: the set of edge servers in each local area network ser, the set of edge-server clock cycles h, the set of edge-server CPU occupancy rates util, the set of vehicle tasks to be processed q, and the set of CPU cycles m occupied by each vehicle task;
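The information collected in step (1) can be sketched as simple data containers. The field names below mirror the sets named in the text (ser, h, util, q, m), but the concrete types and example values are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class EdgeServer:
    server_id: int     # element of the server set `ser`
    clock_hz: float    # CPU clock cycles per second, from set `h`
    cpu_util: float    # current CPU occupancy rate in [0, 1], from set `util`

@dataclass
class VehicleTask:
    task_id: int       # element of the pending-task set `q`
    cpu_cycles: float  # CPU cycles the task occupies, from set `m`

# Hypothetical snapshot the SDN controller might assemble
servers = [EdgeServer(0, 2.0e9, 0.3), EdgeServer(1, 3.0e9, 0.7)]
tasks = [VehicleTask(0, 4.0e8)]
```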

[0084] (2) According to the data obtained in (1), calculate the computation delay of task Qj:

[0085]
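The patent's equation [0085] is not reproduced in the source. A common form for edge-computing delay divides the task's required CPU cycles by the server's idle capacity; the exact formula below is therefore an assumption, not the patent's own equation.

```python
def computation_delay(cpu_cycles: float, clock_hz: float, cpu_util: float) -> float:
    """Delay (seconds) for a task needing `cpu_cycles` on a server whose
    idle capacity is clock_hz * (1 - cpu_util).

    NOTE: this specific form is an assumption; the patent's equation
    [0085] is omitted from the source text.
    """
    available_hz = clock_hz * (1.0 - cpu_util)
    return cpu_cycles / available_hz

# Example: 4e8 cycles on a 2 GHz server that is 30% busy
delay = computation_delay(4.0e8, 2.0e9, 0.3)
```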

[0086] (3) The SDN controller aggregates the information of the other vehicles and edge servers, and calculates the computation delay of the vehicle tasks on all servers:

[0087]
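Step (3) extends the per-server delay to every edge server in the LAN. As above, the per-server delay form is assumed (cycles divided by idle capacity), since the patent's equation [0087] is not shown in the source.

```python
def delays_on_all_servers(cpu_cycles, servers):
    """Delay of one task on each edge server.

    `servers` is an iterable of (clock_hz, cpu_util) pairs; the delay
    form cycles / (clock * (1 - util)) is an illustrative assumption.
    """
    return [cpu_cycles / (clock_hz * (1.0 - util)) for clock_hz, util in servers]

# Delay of a 4e8-cycle task on two hypothetical servers
delays = delays_on_all_servers(4.0e8, [(2.0e9, 0.3), (3.0e9, 0.5)])
```

The SDN controller would compute such a vector for every pending task, giving the load picture used by the scheduling decision in step (4).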

[0088] (4) The SDN controller aggregates the load information of the edge servers, and converts the decision-making for on-board task edge scheduling and resource allocation into solving the following ma...

Specific embodiment approach

[0090] (5) Use the DDQN algorithm to solve the mathematical problem in (4). The specific implementation is as follows:

[0091] 1. First obtain the initialization state, i.e., the current vehicle task and the relevant information of the edge servers. The current Q network generates action A from state S, where A is the computing resource allocated to each task. Specifically, A = argmax_a Q(φ(S), a; ω): in the current state S, the neural network with parameters ω selects the action with the largest Q value among all actions a, based on the feature vector φ(S) of state S.
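The greedy action selection above can be sketched with a minimal linear stand-in for the Q network; the state and action dimensions, and the linear form itself, are assumptions in place of the patent's actual neural network.

```python
import numpy as np

rng = np.random.default_rng(0)
n_features, n_actions = 6, 4                      # assumed sizes
omega = rng.normal(size=(n_features, n_actions))  # network parameters ω (stand-in)

def greedy_action(phi_s: np.ndarray, omega: np.ndarray) -> int:
    """A = argmax_a Q(φ(S), a; ω): pick the action with the largest Q value."""
    q_values = phi_s @ omega  # one Q value per action
    return int(np.argmax(q_values))

phi_s = rng.normal(size=n_features)   # feature vector φ(S)
action = greedy_action(phi_s, omega)  # index of the chosen resource allocation
```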

[0092] 2. Calculate the reward R from state S and action A, and generate the new state S'. After the current on-board tasks are computed, the number of on-board tasks waiting to be computed and the states of the edge servers change; the new state is S';
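The source does not give the reward formula, so the shaping below, which penalizes both task delay and server load imbalance in line with the invention's stated goals, is purely illustrative.

```python
def reward(task_delay: float, server_utils: list[float]) -> float:
    """Illustrative reward R for (S, A): penalize computation delay and
    the spread of CPU utilization across edge servers.

    NOTE: this formula is an assumption; the patent's reward is not
    shown in the source.
    """
    load_imbalance = max(server_utils) - min(server_utils)
    return -task_delay - load_imbalance

r = reward(0.3, [0.4, 0.6])  # → -0.3 - 0.2 = -0.5
```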

[0093] 3. Store the tuple {φ(S), A, R, φ(S')} obtained above into the experience replay pool, whi...
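The replay-pool step, and the double-estimator target that gives DDQN its name, can be sketched as follows. Capacity, batch size, and the discount factor are assumed values; the target computation follows the standard DDQN recipe (online network selects the next action, target network evaluates it), which is what step (5)'s "DDQN algorithm" refers to.

```python
import random
from collections import deque

class ReplayPool:
    """Experience replay pool for transitions {φ(S), A, R, φ(S')}."""

    def __init__(self, capacity: int = 10_000):
        self.pool = deque(maxlen=capacity)  # oldest entries evicted first

    def store(self, phi_s, action, reward, phi_s_next):
        self.pool.append((phi_s, action, reward, phi_s_next))

    def sample(self, batch_size: int):
        return random.sample(self.pool, batch_size)

def ddqn_target(reward, q_next_online, q_next_target, gamma=0.99):
    """Double-DQN target: select a* with the online net ω,
    evaluate it with the target net ω⁻."""
    a_star = max(range(len(q_next_online)), key=lambda a: q_next_online[a])
    return reward + gamma * q_next_target[a_star]

pool = ReplayPool()
pool.store([0.1, 0.2], 1, -0.5, [0.2, 0.3])
y = ddqn_target(-0.5, q_next_online=[1.0, 2.0], q_next_target=[0.5, 1.5])
# a* = 1, so y = -0.5 + 0.99 * 1.5
```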


Abstract

The invention discloses a vehicle-mounted edge task centralized scheduling and resource allocation joint optimization method based on deep reinforcement learning. The method comprises the following steps: 1, acquiring information of the RSUs accessible to vehicles, information of the vehicle-mounted tasks, and the like; 2, converting the vehicle-mounted task edge scheduling and resource allocation decision into a mathematical problem and carrying out mathematical modeling; 3, solving the mathematical model of step 2 by using a deep reinforcement learning method; and 4, deploying the algorithm to a software-defined central controller. The method fully considers the influence of each vehicle-mounted task on the load of the edge computing servers and the interactions among the vehicle-mounted tasks, and maximizes the benefit of the edge computing server provider while completing the vehicle-mounted tasks within the specified time and ensuring the load balance of each edge computing server.

Description

Technical field
[0001] The invention belongs to the field of vehicle-mounted mobile edge computing, and is a method for vehicle-mounted task edge scheduling and resource allocation in a small-cell base-station environment. It is especially suitable for load balancing of small base stations in a LAN.
Background technique
[0002] The Internet of Vehicles (IoV) is an emerging technology that connects vehicle devices through the network and enables them to cooperate with other computing devices. The continuous development of vehicle applications such as high-precision navigation, danger perception, and automatic driving has improved the convenience and safety of vehicle users, but at the same time each application places ever-higher requirements on vehicle computing performance. The traditional cloud-centric computing paradigm cannot accommodate such a large number of computing tasks. In response to this challenge, a new computing paradigm ...


Application Information

IPC(8): H04L29/08; H04L12/24; H04W28/08; G06N3/08
CPC: H04L67/1008; H04L67/10; H04L67/12; H04L41/145; H04L41/142; H04W28/09; H04W28/0958; G06N3/08; Y02T10/40
Inventors: 李致远, 徐丙磊, 彭二帅, 毕俊蕾
Owner: JIANGSU UNIV