A Reinforcement Learning-Based Graph Adversarial Example Generation Method by Adding False Nodes for Document Classification
A technology combining adversarial examples and reinforcement learning, applied in the field of artificial-intelligence information security. It addresses problems such as adversarial examples being difficult to obtain, attacks being difficult to achieve, and the misleading of target-node classification results.
Embodiment Construction
[0032] The present invention will be described in further detail below with reference to the accompanying drawings and embodiments. It should be noted that the following embodiments are intended to facilitate understanding of the present invention but do not limit it in any way. This embodiment describes the specific implementation of the present invention in detail and uses public data sets to verify the effect of this implementation.
[0033] The overall process of the method of the present invention is shown in Figure 1.
[0034] For graph data (A, X) with a total of Y labels and a trained graph node classification model M, first input the graph data into model M and compute the classification result for each node. Select the correctly classified nodes to form the attack target node set V. For each node v in V, assign an attack target label y (where the target label is a wrong category label) to form the attack target (v, y), thus forming the attack ...
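The target-selection step in paragraph [0034] can be sketched as follows. This is a minimal illustration, not the patented implementation: the function name `build_attack_targets`, the toy argmax classifier standing in for the trained model M, and the random choice of a wrong label are all assumptions introduced here for clarity.

```python
import numpy as np

def build_attack_targets(predict, X, labels, num_classes, seed=0):
    """Form the attack target set: for every node that model M classifies
    correctly, pair it with a deliberately wrong target label y."""
    rng = np.random.default_rng(seed)
    preds = predict(X)                         # classification result per node
    correct = np.flatnonzero(preds == labels)  # attack target node set V
    targets = []
    for v in correct:
        # any label other than the true one can serve as the attack target
        wrong = [c for c in range(num_classes) if c != labels[v]]
        targets.append((int(v), int(rng.choice(wrong))))
    return targets

# Toy stand-in for a trained classifier M: argmax over one-hot features.
X = np.eye(4)                    # 4 nodes, one-hot feature rows
labels = np.array([0, 1, 2, 2])  # node 3 is misclassified by the toy model
targets = build_attack_targets(lambda X: X.argmax(axis=1), X, labels, 3)
```

Here nodes 0, 1, and 2 are classified correctly and enter V, so `targets` contains three (v, y) pairs, each with y differing from the node's true label.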