
Low-delay high-reliability V2V resource allocation method based on deep reinforcement learning

A reinforcement learning and resource allocation technology applied in the field of the Internet of Vehicles. It addresses the problems that existing schemes ignore the energy consumption of V2V communication, cannot be extended to large-scale networks, and incur large transmission overhead, with the effect of maximizing system energy efficiency while guaranteeing reliability and delay requirements.

Active Publication Date: 2021-06-11
NANJING UNIV OF AERONAUTICS & ASTRONAUTICS

AI Technical Summary

Problems solved by technology

None of the above works takes into account the energy consumption introduced by V2V communication.
At the same time, resource allocation schemes that use a centralized reinforcement learning architecture require every vehicle to report its information to a central controller, which incurs a large transmission overhead that grows sharply with network size, so such methods cannot be extended to large networks. In schemes that use a fully decentralized reinforcement learning architecture, each agent can only observe the partial information related to itself, which makes the trained model inaccurate.

Method used



Embodiment Construction

[0036] The core idea of the present invention is to propose a low-latency, high-reliability V2V resource allocation method based on deep reinforcement learning, so that communication between vehicles outside the coverage of the base station can meet the delay requirements while maximizing energy efficiency.

[0037] The present invention is described in further detail below.

[0038] Step (1): in areas not covered by a base station, vehicles use URLLC slice resource blocks to transmit driving-safety data over vehicle-to-vehicle (V2V) links;

[0039] Step (2): in the training phase, at each step each V2V agent reports its current local observation to the computing unit. The true environment state consists of the global channel state and the actions of all agents, and is not visible to any individual agent; each V2V agent can only obtain the partial information available to it, that is, its observation. The obser...
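As a rough illustration of the DDQN update used in such a centralized-training stage, the sketch below computes the Double DQN target for a V2V agent whose joint action is a (resource block, power level) pair. The observation size, network layout, action-space dimensions, and discount factor are illustrative assumptions, not values taken from the patent.

```python
# Minimal Double DQN (DDQN) target sketch for a V2V agent.
# Joint action space: N_CHANNELS URLLC resource blocks x N_POWER transmit-power levels (assumed sizes).
import torch
import torch.nn as nn

N_CHANNELS, N_POWER = 4, 3            # assumed action space dimensions
N_ACTIONS = N_CHANNELS * N_POWER
OBS_DIM = 16                          # assumed local observation size (CSI, queue length, delay budget, ...)
GAMMA = 0.95                          # assumed discount factor

class QNet(nn.Module):
    """Q-network mapping a local V2V observation to Q-values over joint (channel, power) actions."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(OBS_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS),
        )

    def forward(self, obs):
        return self.net(obs)

online, target = QNet(), QNet()
target.load_state_dict(online.state_dict())

def ddqn_target(reward, next_obs, done):
    """y = r + gamma * Q_target(s', argmax_a Q_online(s', a)) for non-terminal transitions."""
    with torch.no_grad():
        best_a = online(next_obs).argmax(dim=1, keepdim=True)   # action selected by the online net
        next_q = target(next_obs).gather(1, best_a).squeeze(1)  # value evaluated by the target net
    return reward + GAMMA * next_q * (1.0 - done)

# Example: targets for a batch of 8 transitions sampled from a replay buffer.
obs_next = torch.randn(8, OBS_DIM)
r, d = torch.rand(8), torch.zeros(8)
print(ddqn_target(r, obs_next, d).shape)   # torch.Size([8])
```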



Abstract

The invention proposes a low-delay, high-reliability resource allocation method based on deep reinforcement learning for NR-V2X sidelink resource allocation outside the coverage of a base station, in which each vehicle schedules the URLLC slice resources used by V2V users in a 5G network according to its own observed information and a Q-network obtained in the training stage. To maximize the energy efficiency of V2V communication while guaranteeing the reliability and delay requirements, a deep reinforcement learning architecture with centralized training and distributed execution is provided, and a model meeting the requirements is trained with the DDQN learning method. The objectives and constraints of the resource allocation problem are converted into the reward design of the deep reinforcement learning problem, so the joint optimization of V2V user channel allocation and power selection can be solved effectively, with stable performance over a series of continuous action spaces.
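To illustrate how an objective and its constraints can be folded into a reward as the abstract describes, the sketch below combines an energy-efficiency term with penalties for violating latency and reliability targets. The weights, thresholds, and variable names are assumptions for illustration only, not values from the patent.

```python
# Illustrative reward shaping for V2V resource allocation:
# reward energy efficiency, penalize latency and reliability violations.
# All weights and thresholds below are assumed values.

def v2v_reward(rate_bps: float, tx_power_w: float, circuit_power_w: float,
               latency_ms: float, outage_prob: float,
               latency_budget_ms: float = 10.0, outage_target: float = 1e-3,
               lambda_delay: float = 1.0, lambda_rel: float = 1.0) -> float:
    """Reward = energy efficiency (bits/Joule) minus weighted constraint penalties."""
    energy_efficiency = rate_bps / (tx_power_w + circuit_power_w)   # bits per Joule
    delay_penalty = max(0.0, latency_ms - latency_budget_ms)        # exceedance of the delay budget
    reliability_penalty = max(0.0, outage_prob - outage_target)     # exceedance of the outage target
    return energy_efficiency - lambda_delay * delay_penalty - lambda_rel * reliability_penalty

# Example: a link achieving 2 Mbit/s at 0.3 W total power within both budgets.
print(v2v_reward(rate_bps=2e6, tx_power_w=0.2, circuit_power_w=0.1,
                 latency_ms=4.0, outage_prob=5e-4))
```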

Description

Technical Field
[0001] The present invention relates to Internet of Vehicles technology, in particular to a resource allocation method for vehicular networks, and more specifically to a low-latency, high-reliability vehicle-to-vehicle (V2V) communication resource allocation method based on deep reinforcement learning.
Background Technique
[0002] Vehicle-to-everything (V2X) is a typical application of the Internet of Things (IoT) in the field of Intelligent Transportation Systems (ITS), forming a ubiquitous smart vehicle network. The Internet of Vehicles shares and exchanges data according to agreed communication protocols and data interaction standards. It enables intelligent traffic management and services, such as improved road safety, enhanced road condition awareness, and reduced traffic congestion, through real-time perception and collaboration among pedestrians, roadside facilities, vehicles, networks, and the cloud.
[0003] Deep reinforcement learning is a kind o...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): H04W4/46, H04W24/06, H04W72/04, H04W72/08, H04W72/54
CPC: H04W4/46, H04W24/06, H04W72/0473, H04W72/542, Y02D30/70
Inventor: 缪娟娟, 宋晓勤, 王书墨, 张昕婷, 雷磊
Owner: NANJING UNIV OF AERONAUTICS & ASTRONAUTICS