A Defense Method against Adversarial Attacks on Deep Reinforcement Learning Models
A reinforcement learning and modeling technology, applied to neural learning methods, biological neural network models, platform integrity maintenance, and related fields; it addresses the problem of adversarial attacks on neural networks and achieves the effect of improving efficiency.
Detailed Description of the Embodiments
[0032] In order to make the objectives, technical solutions, and advantages of the present invention clearer, the present invention is further described in detail below in conjunction with the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only intended to explain the present invention and do not limit its protection scope.
[0033] As shown in Figure 1, the defense method against adversarial attacks on deep reinforcement learning models provided by this embodiment includes the following steps:
[0034] S101. Use a visual prediction model built on a generative adversarial network, which takes the environment state at the previous time step as input and outputs a prediction of the current environment state, and obtain the predicted environment state value of the next frame for that predicted state under the deep reinforcement learning policy (an illustrative sketch follows these steps);
[0035] S102. Obtain the actual current environmen...
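The description of S102 is truncated above, but the steps suggest a scheme in which the value assigned to the GAN-predicted frame is compared with the value of the actually observed frame. The following is a minimal PyTorch sketch of such a detection loop; the network architectures, input shapes, comparison rule, and threshold are all illustrative assumptions, not the patent's concrete implementation.

```python
# Hypothetical sketch of the S101/S102 detection idea.
# Architectures, shapes, the comparison rule, and the threshold are assumptions.
import torch
import torch.nn as nn

class FramePredictor(nn.Module):
    """GAN generator: predicts the current frame from the previous frame."""
    def __init__(self, channels: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, kernel_size=3, padding=1), nn.Tanh(),
        )

    def forward(self, prev_frame: torch.Tensor) -> torch.Tensor:
        return self.net(prev_frame)

class PolicyValueNet(nn.Module):
    """Deep RL policy network that also outputs a state value estimate."""
    def __init__(self, channels: int = 3, n_actions: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(channels, 16, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        with torch.no_grad():
            feat_dim = self.features(torch.zeros(1, channels, 84, 84)).shape[1]
        self.policy = nn.Linear(feat_dim, n_actions)
        self.value = nn.Linear(feat_dim, 1)

    def forward(self, frame: torch.Tensor):
        h = self.features(frame)
        return self.policy(h), self.value(h)

def detect_adversarial(prev_frame, observed_frame, predictor, agent, threshold=0.5):
    """Flag the observed frame as suspicious when the value the agent assigns
    to it deviates strongly from the value of the GAN-predicted frame.
    The comparison rule and threshold are hypothetical."""
    with torch.no_grad():
        predicted_frame = predictor(prev_frame)   # S101: predict current state
        _, v_pred = agent(predicted_frame)        # value under the DRL policy
        _, v_obs = agent(observed_frame)          # S102 (assumed): value of actual state
    return (v_pred - v_obs).abs().item() > threshold

if __name__ == "__main__":
    predictor, agent = FramePredictor(), PolicyValueNet()
    prev_frame = torch.rand(1, 3, 84, 84) * 2 - 1
    observed_frame = torch.rand(1, 3, 84, 84) * 2 - 1
    print("suspicious:", detect_adversarial(prev_frame, observed_frame, predictor, agent))
```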