
Task scheduling method based on depth reinforcement learning under vehicle network environment

A technology combining reinforcement learning and task scheduling, applied to neural learning methods, biological neural network models, program startup/switching, etc., addressing problems such as tasks that cannot be completed in time.

Active Publication Date: 2017-09-08
NANJING UNIV
Cites 1 | Cited by 27

AI Technical Summary

Problems solved by technology

[0004] Cloud computing provides abundant resources for mobile terminals, but when users offload work to the cloud center, limited communication bandwidth means that, even though the cloud center has strong computing power, tasks sometimes cannot be completed in time because of communication delay.



Examples


Embodiment

[0100] In this embodiment, experiments are carried out in a certain area of city A.

[0101] For this area, there are 10 roadside units. Count the number of vehicles (unit: vehicles) within each roadside unit over a certain period of time, {Q1, Q2, ..., Q10}, and obtain the task queue length of each roadside unit, {L1, L2, ..., L10}.
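As an illustration only (the patent gives no code), a minimal NumPy sketch of how these per-unit measurements could be packed into the scheduler's state; the helper name `build_state` and the flat concatenation are assumptions:

```python
import numpy as np

NUM_RSUS = 10  # roadside units in the monitored area of city A

def build_state(vehicle_counts, queue_lengths):
    """Concatenate per-RSU vehicle counts {Q1..Q10} and queue lengths {L1..L10}."""
    assert len(vehicle_counts) == NUM_RSUS and len(queue_lengths) == NUM_RSUS
    return np.concatenate([
        np.asarray(vehicle_counts, dtype=float),
        np.asarray(queue_lengths, dtype=float),
    ])  # shape (20,), matching the 20-neuron input layer described below
```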

[0102] Secondly, initialize the task-assignment neural network with an input layer of 20 neurons, a first hidden layer of 7 neurons, a second hidden layer of 7 neurons, and an output layer of 10 neurons.
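A hedged sketch of this 20-7-7-10 network in plain NumPy; the layer sizes follow the embodiment, while the random weight initialization, tanh activations, and softmax output are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
sizes = [20, 7, 7, 10]  # input, first hidden, second hidden, output (one score per RSU)
weights = [rng.normal(0.0, 0.1, (n_in, n_out)) for n_in, n_out in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n_out) for n_out in sizes[1:]]

def forward(state):
    """Map a 20-dimensional state to a probability distribution over the 10 RSUs."""
    h = state
    for W, b in zip(weights[:-1], biases[:-1]):
        h = np.tanh(h @ W + b)               # hidden layers (activation choice assumed)
    logits = h @ weights[-1] + biases[-1]
    exp = np.exp(logits - logits.max())      # numerically stable softmax
    return exp / exp.sum()
```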

[0103] Next, warm up the neural network: following a random-assignment strategy for a period of time, record the response time and the environment variables of each task.
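A sketch of this warm-up phase, assuming hypothetical helpers `observe_state`, `dispatch_task`, and `measure_response_time` that stand in for the measurements the embodiment describes:

```python
import random

NUM_RSUS = 10

def warm_up(num_tasks, observe_state, dispatch_task, measure_response_time):
    """Assign tasks to random RSUs for a while and log (state, action, response time)."""
    trace = []
    for _ in range(num_tasks):
        state = observe_state()                  # vehicle counts + queue lengths
        action = random.randrange(NUM_RSUS)      # random-assignment strategy
        dispatch_task(action)
        response_time = measure_response_time()  # time from task arrival to completion
        trace.append((state, action, response_time))
    return trace
```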

[0104] Then, the profit value of each strategy is calculated from the response time and standardized so that good strategies can be distinguished from bad ones.
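A minimal sketch of this step; using the negative response time as the raw profit and z-score standardization are assumptions, since the text only says the profit is computed from the response time and standardized:

```python
import numpy as np

def normalized_profits(response_times):
    """Standardize profits so that better-than-average strategies get positive values."""
    profits = -np.asarray(response_times, dtype=float)  # shorter response -> higher profit
    std = profits.std()
    if std == 0.0:
        return np.zeros_like(profits)
    return (profits - profits.mean()) / std             # zero mean, unit variance
```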

[0105] Next, the neural network is updated based on the BP algorithm us...
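As a hedged illustration of a BP-based update step, the following continues the NumPy sketch above (reusing its `weights` and `biases`) with a REINFORCE-style policy-gradient step scaled by the normalized profit; the loss choice and learning rate are assumptions, not taken from the patent:

```python
def update(state, action, profit, lr=0.01):
    """One backpropagation step on -profit * log pi(action | state)."""
    # forward pass, keeping hidden activations for backpropagation
    activations = [state]
    h = state
    for W, b in zip(weights[:-1], biases[:-1]):
        h = np.tanh(h @ W + b)
        activations.append(h)
    logits = h @ weights[-1] + biases[-1]
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()

    # gradient of the loss with respect to the output logits
    grad = probs.copy()
    grad[action] -= 1.0
    grad *= profit

    # backpropagate through the layers and apply gradient descent
    for i in reversed(range(len(weights))):
        a_prev = activations[i]
        grad_prev = (grad @ weights[i].T) * (1.0 - a_prev ** 2) if i > 0 else None
        weights[i] -= lr * np.outer(a_prev, grad)
        biases[i] -= lr * grad
        grad = grad_prev
```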


Abstract

The invention discloses a task scheduling method based on deep reinforcement learning in a vehicle network environment. The method comprises the following steps: 1, obtaining vehicle flow data within the coverage of each roadside unit; 2, obtaining load data of each roadside unit and transferring the data to the other roadside units by multicast; 3, building a deep neural network and initializing the related variables; 4, for each request arriving within the coverage during the initial period, randomly choosing local execution or offloading to another roadside unit for execution, and recording the time from the request's arrival to its completion; 5, updating the neural network once the data collected in step 4 reaches a certain amount; 6, receiving newly arrived requests and using the updated neural network to distribute them according to the vehicle flow in each area and the load of each roadside unit; 7, continuing to collect data and repeating steps 5-6.
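For orientation only, a compact sketch of how steps 5-7 could fit together after the random warm-up of step 4, reusing the hypothetical helpers and the `forward`, `update`, and `normalized_profits` sketches from the embodiment section; the batch size and greedy dispatch rule are assumptions:

```python
import numpy as np

def scheduling_loop(observe_state, dispatch_task, measure_response_time, batch_size=100):
    """Collect scheduling traces, periodically update the network, and keep dispatching."""
    trace = []
    while True:
        state = observe_state()                   # steps 1-2: traffic and RSU load
        action = int(np.argmax(forward(state)))   # step 6: let the network pick an RSU
        dispatch_task(action)
        trace.append((state, action, measure_response_time()))
        if len(trace) >= batch_size:              # step 5: enough data collected
            states, actions, times = zip(*trace)
            for s, a, p in zip(states, actions, normalized_profits(times)):
                update(s, a, p)
            trace.clear()                         # step 7: keep collecting and repeat
```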

Description

Technical field

[0001] The invention belongs to the field of computer mobile cloud computing, and in particular relates to a task scheduling method based on deep reinforcement learning in a vehicle network environment.

Background technique

[0002] Mobile cloud computing is a new cloud computing model that has emerged with the rapid development of mobile terminals in recent years. It provides abundant computing resources for mobile end users and cloud service providers. The mobile terminal can offload tasks to the cloud, and the cloud returns the calculation result to the mobile terminal, overcoming the limited computing power of the mobile terminal and reducing its power consumption.

[0003] As a typical case of a self-organizing network, a vehicular ad-hoc network (VANET) can share data and offload tasks through vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication. With the development of...


Application Information

Patent Type & Authority Applications(China)
IPC IPC(8): G06F9/48G06F9/50G06F9/54G06N3/06G06N3/08H04L29/08
CPCH04L67/12G06F9/4881G06F9/5038G06F9/547G06N3/061G06N3/08
Inventor 窦万春费凡
Owner NANJING UNIV