Distributed adaptive stable topology generation method based on reinforcement learning
A technology combining reinforcement learning and topology generation, applied in the field of communication. It addresses problems such as failure to consider the combined influence of links, untimely information transmission, and increased node energy consumption, and achieves real-time model updates, more stable topology connections, and reduced energy consumption.
Examples
Embodiment 1
[0034] The mobile ad hoc network plays an important role in communication scenarios without infrastructure. Such a network has no infrastructure support: each mobile node acts as both router and host, and nodes can form an arbitrary network topology over wireless connections. Mobile ad hoc networks have broad application prospects in military communications, mobile networks, interconnected personal area networks, emergency services and disaster recovery, and wireless sensor networks, and have therefore become a current research hotspot. The mobility of nodes in a mobile ad hoc network causes the network topology formed over the wireless channel to change at any time. To effectively reduce the impact of dynamic topology changes, existing methods use node mobility to predict the stability of link connections in the network and the resulting network topology. However, the existing ...
Embodiment 2
[0053] This distributed adaptive stable topology generation method based on reinforcement learning is the same as in Embodiment 1. In the reinforcement learning described in step 4 of the present invention, the current node partitions the received signal strength indicator (RSSI) value reported by a neighbor node; adaptive reinforcement learning is required only when the RSSI value falls into the interval [a, b]. This specifically includes the following steps:
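The partition step above can be sketched as a three-way decision. This is a hypothetical illustration, not the patent's implementation: the function name, the decision labels, and the example dBm interval are all assumptions; only the rule "run adaptive RL when RSSI falls in [a, b]" comes from the text.

```python
# Hypothetical sketch of the RSSI partition step: reinforcement learning is
# triggered only when a neighbor's RSSI falls inside the adaptive interval
# [a, b]; readings outside it are treated as clearly unstable or clearly stable.

def classify_link(rssi: float, a: float, b: float) -> str:
    """Decide how to treat a neighbor link from its RSSI reading."""
    if rssi < a:
        return "drop"   # signal too weak: treat the link as unstable
    if rssi > b:
        return "keep"   # signal strong enough: treat the link as stable
    return "learn"      # ambiguous zone [a, b]: run adaptive reinforcement learning

# Example with an assumed ambiguous zone of [-85, -70] dBm:
assert classify_link(-90.0, -85.0, -70.0) == "drop"
assert classify_link(-60.0, -85.0, -70.0) == "keep"
assert classify_link(-78.0, -85.0, -70.0) == "learn"
```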
[0054] Step 4.1, determine the overall structure of reinforcement learning: each mobile node whose RSSI falls in the interval [a, b] is regarded as an agent, so the dynamic changes of the entire network can be treated as a distributed multi-agent collaboration system. For each distributed agent, let its environment state set be S and its action set be A, and define a reward function; the action selection strategy is π(s_i, a_j). ...
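The per-node agent described above can be sketched as a minimal tabular Q-learning agent. This is an assumption-laden illustration: the patent does not specify the learning rule, and the ε-greedy policy here merely stands in for the strategy π(s_i, a_j); all hyperparameters are illustrative.

```python
import random
from collections import defaultdict

# Sketch of one distributed agent: a node learns Q-values over abstract
# (state, action) pairs. States s_i, actions a_j, and the reward signal are
# placeholders; the epsilon-greedy rule stands in for pi(s_i, a_j).

class NodeAgent:
    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)   # Q[(state, action)] -> estimated value
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def policy(self, state):
        """Epsilon-greedy action selection standing in for pi(s_i, a_j)."""
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, s, a, reward, s_next):
        """Standard Q-learning backup from one (s, a, r, s') transition."""
        best_next = max(self.q[(s_next, an)] for an in self.actions)
        self.q[(s, a)] += self.alpha * (reward + self.gamma * best_next - self.q[(s, a)])
```

Each node would run its own `NodeAgent` instance, which is what makes the scheme distributed: no global coordinator holds the Q-table.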
Embodiment 3
[0076] This distributed adaptive stable topology generation method based on reinforcement learning is the same as in Embodiments 1-2. The update formula for the adaptive interval described in step 6 of the present invention is as follows:
[0077] (adaptive-interval update formula; not reproduced in the source)

[0078] In the formula: a is the lower boundary of the interval; b is the upper boundary of the interval; RSSI is the received signal strength indicator value of the neighbor node; s' is the actual connection variable state between the node and the neighbor node at the next moment; and ŝ' is the predicted connection variable state between the node and the neighbor node at the next moment. In the adaptive interval update process of the present invention, the first condition that must be satisfied is ŝ' ≠ s'. This indicates that the current node's prediction does not match the actual connection variable state, i.e., that the node has made an error, which drives the boundary adjustment. On this basis, when the RS...
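Since the update formula itself is not reproduced in the source, the rule can only be sketched under stated assumptions: the boundary on the side of the interval nearest the mispredicted RSSI reading is nudged inward by a step `delta`. The direction and magnitude of the adjustment are assumptions; only the trigger condition (prediction ŝ' disagreeing with the observed state s') comes from the text.

```python
# Hedged sketch of the adaptive-interval update. Trigger: the predicted link
# state s_hat differs from the observed next state s_next. The nudge-by-delta
# rule below is an assumption; the patent's exact formula is not shown here.

def update_interval(a, b, rssi, s_next, s_hat, delta=1.0):
    """Return the (possibly adjusted) interval boundaries (a, b)."""
    if s_hat == s_next:
        return a, b                  # prediction was correct: no change
    mid = (a + b) / 2.0
    if rssi <= mid:
        a = min(a + delta, b)        # misprediction near the lower edge: raise a
    else:
        b = max(b - delta, a)        # misprediction near the upper edge: lower b
    return a, b
```

Shrinking the interval on misprediction narrows the ambiguous zone in which reinforcement learning must run, which matches the stated goal of reducing node energy consumption.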