
A social network public opinion evolution method

A social-network and public-opinion technology, applied in the field of Nash equilibrium strategies, which addresses the problem that imitation-based algorithms cannot guarantee global optimality, and achieves the effect of maximizing agent benefits

Active Publication Date: 2022-05-13
DONGGUAN UNIV OF TECH
Cites: 10 · Cited by: 0

AI Technical Summary

Problems solved by technology

In addition, an imitation-based strategy cannot guarantee that the algorithm learns the global optimum, because an agent's strategy depends on the strategy of the leader (or of the imitated agent), and the leader's strategy is not always the best.


Image

  • Figures 1–3: A social network public opinion evolution method

Examples


Embodiment Construction

[0044] The present invention will be described in further detail below in conjunction with the accompanying drawings and embodiments.

[0045] The Nash equilibrium strategy on continuous action spaces of the present invention extends the single-agent reinforcement learning algorithm CALA [7] (Continuous Action Learning Automata) by introducing a WoLS (Win or Learn Slow, i.e., learn slowly when winning and quickly when losing) mechanism, so that the algorithm can effectively handle learning problems in a multi-agent environment. The Nash equilibrium strategy of the present invention is therefore referred to as WoLS-CALA (Win or Learn Slow Continuous Action Learning Automaton). The present invention first describes CALA in detail.
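The WoLS mechanism is named above but not spelled out in this excerpt. Based on the related WoLF ("Win or Learn Fast") literature, one plausible reading is a learning-rate switch that adapts slowly while the current policy is winning and quickly while it is losing. The function name `wols_rate` and the rate values below are illustrative assumptions, not the patent's exact parameters:

```python
def wols_rate(reward, avg_reward, lr_win=0.01, lr_lose=0.05):
    """Win or Learn Slow: pick the small learning rate when the current
    policy is 'winning' (reward at least its running average), and the
    large one when it is losing. Names and values are illustrative."""
    return lr_win if reward >= avg_reward else lr_lose
```

In a WoLS-CALA-style algorithm, a rate chosen this way would replace CALA's fixed step size, so an agent consolidates a winning policy slowly and escapes a losing one quickly.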

[0046] Continuous Action Learning Automata (CALA) [7] is a policy gradient reinforcement learning algorithm for learning problems in continuous action spaces. Among...
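As a rough sketch of how a CALA-style learner operates, the following follows the standard continuous-action learning automaton formulation: maintain a Gaussian policy N(mu, sigma), sample an action, compare its reward with the reward of the mean action, and move the Gaussian toward the better of the two. The constants `lam`, `sigma_L`, `K`, and the toy reward function are assumptions for illustration, not the patent's settings:

```python
import numpy as np

def cala_step(mu, sigma, reward_fn, lam=0.05, sigma_L=1e-3, K=0.1):
    """One CALA update: sample an action from N(mu, sigma), compare its
    reward with that of the mean action, and shift the Gaussian toward
    the better of the two. sigma is kept above the floor sigma_L."""
    s = max(sigma, sigma_L)
    x = np.random.normal(mu, s)   # exploratory action
    beta_x = reward_fn(x)         # reward of the sampled action
    beta_mu = reward_fn(mu)       # reward of the current mean action
    mu_new = mu + lam * (beta_x - beta_mu) / s * (x - mu) / s
    sigma_new = (sigma
                 + lam * (beta_x - beta_mu) / s * (((x - mu) / s) ** 2 - 1)
                 - lam * K * (sigma - sigma_L))
    return mu_new, max(sigma_new, sigma_L)

# toy usage: the automaton should drive mu toward the reward peak at a = 2
np.random.seed(0)
mu, sigma = 0.0, 1.0
for _ in range(5000):
    mu, sigma = cala_step(mu, sigma, lambda a: -(a - 2.0) ** 2)
```

As the mean converges on the reward peak, the `K` penalty term shrinks the exploration width `sigma` toward its floor, so the policy settles on a near-deterministic action.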



Abstract

The invention provides a social network public opinion evolution method, which belongs to the field of reinforcement learning methods. The method includes two types of agents: Gossiper agents, which simulate the general public in a social network, and Media agents, which simulate media outlets or public figures aiming to attract the general public. The Media agent adopts the Nash equilibrium strategy on the continuous action space to compute the opinion with the optimal reward, updates its opinion accordingly, and broadcasts it in the social network. The beneficial effects of the invention are that each agent maximizes its own interests while interacting with other agents, and the agents can finally learn a Nash equilibrium.
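To make the Gossiper/Media setup concrete, here is a minimal toy round in which Gossipers holding opinions in [0, 1] move toward a broadcast Media opinion when it falls within their tolerance radius, and the Media's reward is the number of Gossipers it attracts. The bounded-confidence dynamics, function names, and parameters are illustrative assumptions, not the patent's exact model:

```python
import numpy as np

def broadcast_round(gossipers, media_opinion, mu=0.3, radius=0.2):
    """One illustrative round: Gossipers whose opinion lies within
    `radius` of the Media opinion move a fraction `mu` toward it
    (bounded-confidence style); the Media's reward is the count of
    attracted Gossipers. A toy reconstruction, not the patent's model."""
    attracted = np.abs(gossipers - media_opinion) <= radius
    updated = np.where(attracted,
                       gossipers + mu * (media_opinion - gossipers),
                       gossipers)
    return updated, int(attracted.sum())

np.random.seed(1)
opinions = np.random.uniform(0.0, 1.0, size=100)       # the general public
new_opinions, reward = broadcast_round(opinions, 0.5)  # Media broadcasts 0.5
```

In the full method, the Media agent would choose its broadcast opinion from the continuous interval so as to maximize this reward, which is where the continuous-action Nash equilibrium strategy comes in.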

Description

technical field

[0001] The present invention relates to a Nash equilibrium strategy, in particular to a Nash equilibrium strategy on a continuous action space, and also to a social network public opinion evolution model based on that strategy.

Background technique

[0002] In a continuous action space, on the one hand, an agent's choice of actions is infinite, and traditional tabular Q-learning algorithms cannot store infinitely many reward estimates; on the other hand, in a multi-agent environment, a continuous action space further increases the difficulty of the problem.

[0003] In the field of multi-agent reinforcement learning, the action space of an agent can be a discrete finite set or a continuous set. Because the essence of reinforcement learning is to find the optimum through continuous trial and error, and a continuous action space has infinitely many action options, and the multi-agent env...

Claims


Application Information

Patent Type & Authority: Patents (China)
IPC(8): G06F16/9536
CPC: G06Q50/01
Inventor: 侯韩旭, 郝建业, 张程伟
Owner: DONGGUAN UNIV OF TECH