
A Reinforcement Learning-Based Graph Adversarial Example Generation Method by Adding False Nodes for Document Classification

A technique combining adversarial examples and reinforcement learning, applied in the field of artificial-intelligence information security, addressing problems such as attack operations that are difficult to carry out in practice, perturbations that are difficult to realize, and the need to mislead target-node classification results.

Active Publication Date: 2021-06-29
ZHEJIANG UNIV

AI Technical Summary

Problems solved by technology

In the field of graph data, current research misleads the classification results of target nodes by adding or deleting existing edges or by perturbing node features.
However, this approach may be difficult to implement in real scenarios. For example, in a social network, deleting or adding an edge between two users may require obtaining those users' login permissions, which is hard to do in practice.




Embodiment Construction

[0032] The present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be noted that the following embodiments are intended to aid understanding of the present invention and do not limit it in any way. This embodiment describes the specific implementation of the present invention in detail and verifies its effect on public data sets.

[0033] The overall process of the method of the present invention is shown in Figure 1.

[0034] For graph data (A, X) with a total of Y labels and a trained graph node classification model M, first input the graph data into model M, compute the classification result for each node, and select the correctly classified nodes to form the attack target node set V. For each node v in set V, assign an attack target label y (a wrong category label) to form the attack target (v, y), thus forming the attack ...
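The attack-target selection in paragraph [0034] can be sketched as follows. This is a minimal sketch, not the patent's actual code: the `model.predict(A, X)` interface, the argument names, and the random choice of a wrong label are all assumptions.

```python
import numpy as np

def build_attack_targets(A, X, labels, model, num_classes, rng=None):
    """Select correctly classified nodes and assign each a wrong target label.

    Hypothetical interface: model.predict(A, X) returns one predicted
    class index per node.
    """
    rng = np.random.default_rng(rng)
    preds = model.predict(A, X)
    targets = []
    for v, (pred, true) in enumerate(zip(preds, labels)):
        if pred != true:
            continue  # only correctly classified nodes become attack targets
        # pick any label other than the true one as the attack target label y
        y = rng.choice([c for c in range(num_classes) if c != true])
        targets.append((v, int(y)))
    return targets
```

Each pair (v, y) is one attack goal: make the classifier assign the wrong label y to node v.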



Abstract

The invention discloses a method for generating graph adversarial samples by adding false nodes based on reinforcement learning, including: (1) obtaining the original graph data and a graph node classification model, and constructing a training set and a test set; adding false nodes to the original graph data to obtain the initial adversarial sample; (2) constructing the attack model; (3) selecting an attack target from the training set; (4) inputting the current adversarial sample and attack target into the attack model, selecting the node with the largest evaluation value, and constructing a new adversarial sample; (5) inputting the new adversarial sample into the classification model; if the classification result is the target result, obtaining the adversarial sample and proceeding to the next step, otherwise returning to step (4); (6) training the attack model, and using the trained attack model for testing and application. The invention generates graph adversarial samples by adding false nodes, which can help to design more robust graph deep learning models.
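Steps (4)-(5) of the abstract describe a greedy query loop: repeatedly take the action the attack model values most, then check whether the classifier now outputs the target label. A minimal sketch, assuming hypothetical `attack_model.evaluate`, `adv.apply`, and `clf.predict` interfaces that stand in for the patent's attack and classification models:

```python
def generate_adversarial(adv, target, attack_model, clf, max_steps=20):
    """Iterate steps (4)-(5): greedily apply the highest-valued action until
    the classifier assigns the target label to the target node.

    All three collaborator interfaces are assumptions for illustration:
      attack_model.evaluate(adv, target) -> {action: value} per candidate
      adv.apply(action)                  -> new adversarial sample
      clf.predict(adv, v)                -> predicted label of node v
    """
    v, y = target
    for _ in range(max_steps):
        scores = attack_model.evaluate(adv, target)   # step (4): value each action
        action = max(scores, key=scores.get)          # pick largest evaluation value
        adv = adv.apply(action)                       # construct new adversarial sample
        if clf.predict(adv, v) == y:                  # step (5): success check
            return adv
    return None  # attack failed within the step budget
```

In the patent's reinforcement-learning setting, the evaluation values would come from a learned policy or value function that step (6) trains; here they are opaque scores.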

Description

technical field

[0001] The invention belongs to the technical field of artificial-intelligence information security, and in particular relates to a method for generating graph adversarial samples by adding false nodes based on reinforcement learning.

background technique

[0002] A graph in graph theory consists of a number of given points and lines connecting pairs of points. Such a graph is usually used to describe a relationship between things: points represent things, and a line connecting two points represents a relationship between the corresponding two things. Formally, a graph G is an ordered pair (V, E), where V is the vertex set, the set of all vertices in the graph, and E is the edge set, the set of edges between vertices. Simply put, vertices represent things and edges represent relationships between things. In addition, the attribute graph (Attributed G...
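The ordered-pair definition G = (V, E) and the attributed graph extension can be illustrated with a toy adjacency-matrix and feature-matrix representation. The NumPy encoding below is an assumed choice for illustration; the patent does not prescribe a data structure:

```python
import numpy as np

# Toy attributed graph: adjacency matrix A encodes the edge set E,
# feature matrix X stores one attribute vector per vertex in V.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])       # vertices 0-1 and 1-2 are connected
X = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.5, 0.5]])      # a 2-dimensional attribute vector per vertex

assert (A == A.T).all()         # undirected graph: the edge relation is symmetric
degrees = A.sum(axis=1)         # number of relations each vertex participates in
```

This (A, X) pair is exactly the form of graph data the method of paragraph [0034] takes as input.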


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06K9/62; G06K9/66
CPC: G06V30/194; G06F18/214
Inventors: 李莹 (Li Ying), 陈裕 (Chen Yu), 尹建伟 (Yin Jianwei), 邓水光 (Deng Shuiguang)
Owner ZHEJIANG UNIV