
Adversarial sample defense method and device based on data disturbance

A technology relating to adversarial samples and data perturbation, applied to neural learning methods, that addresses the long training periods and inflexibility of existing defense methods and their difficulty in meeting the practical needs of autonomous driving scenarios, with the effect of improving recognition accuracy.

Publication Date: 2021-10-22 (application pending)
Applicant: BEIHANG UNIV

AI Technical Summary

Problems solved by technology

Although existing adversarial example defense methods for models achieve good defense effects, they require targeted training and have long training periods. In use, such defenses are not flexible enough to meet the actual needs of autonomous driving scenarios.


Embodiment Construction

[0051] The technical content of the present invention will be described in detail below in conjunction with the accompanying drawings and specific embodiments.

[0052] Figure 1 is a schematic diagram of the application of the adversarial example defense method provided by the present invention in the autonomous driving scene. In this scenario, the neural network model H(·) is implanted into the autonomous vehicle as the recognition neural network, and the data disturbance is an interference factor that affects the images the autonomous vehicle recognizes. The images here are the image information acquired by the self-driving vehicle during driving, such as street signs, road surfaces, roadblocks, and pedestrians.
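To make this concrete, the following is a minimal sketch of how the in-vehicle recognition network H(·) might apply a stored trained disturbance to each incoming camera frame. It assumes PyTorch and image tensors in [0, 1]; the function name and shapes are illustrative, not taken from the patent.

```python
import torch

def recognize_frame(H: torch.nn.Module, frame: torch.Tensor,
                    delta: torch.Tensor) -> int:
    """Classify one camera frame after adding the stored trained disturbance.

    frame: image tensor of shape (C, H, W) with values in [0, 1].
    delta: trained data disturbance of the same shape.
    """
    defended = torch.clamp(frame + delta, 0.0, 1.0)  # keep a valid pixel range
    with torch.no_grad():
        logits = H(defended.unsqueeze(0))            # add the batch dimension
    return int(logits.argmax(dim=1))                 # predicted class index
```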

[0053] As shown in Figure 2, the adversarial sample defense method provided by the embodiment of the present invention includes at least the following steps:

[0054] 101. Add data perturbation to input samples to form defense samples;

[...
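As an illustration of step 101 above, here is a minimal sketch assuming image tensors in [0, 1] and a single learnable perturbation shared across samples; the patent text only says a data perturbation is added to the input samples, so the sharing and the clamping are assumptions.

```python
import torch

def make_defense_samples(x: torch.Tensor, delta: torch.Tensor) -> torch.Tensor:
    """x: input samples (N, C, H, W) in [0, 1]; delta: perturbation (C, H, W)."""
    # Broadcasting adds the same disturbance to every sample in the batch,
    # and clamping keeps the defense samples in the valid image range.
    return torch.clamp(x + delta, 0.0, 1.0)
```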



Abstract

The invention discloses an adversarial sample defense method and device based on data disturbance. The method comprises the following steps: adding pixels capable of interfering with a vehicle's recognition of road signs to an input sample as a data disturbance, forming a defense sample; inputting the defense sample into a target neural network model for optimization and outputting a trained data disturbance; and adding the trained data disturbance to the samples to be recognized by the recognition neural network, where the recognition neural network is the neural network model implanted in the autonomous vehicle. The method can improve the robustness of the neural network model and is particularly suitable for application in autonomous driving scenarios.
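The abstract does not spell out the optimization objective. Under one plausible reading (the disturbance is trained so that a frozen target model still classifies the perturbed inputs correctly), the pipeline could look like the sketch below; the loss, optimizer, learning rate, and epsilon bound are illustrative assumptions, not taken from the patent.

```python
import torch
import torch.nn.functional as F

def train_disturbance(target_model, loader, shape=(3, 224, 224),
                      epochs=10, lr=0.01, eps=8 / 255):
    """Optimize one shared data disturbance against a frozen target model."""
    for p in target_model.parameters():
        p.requires_grad_(False)                        # only delta is trained
    target_model.eval()
    delta = torch.zeros(shape, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(epochs):
        for x, y in loader:                            # labelled training images
            defended = torch.clamp(x + delta, 0.0, 1.0)
            loss = F.cross_entropy(target_model(defended), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
            with torch.no_grad():
                delta.clamp_(-eps, eps)                # keep the disturbance small
    return delta.detach()                              # the "trained data disturbance"
```

The returned disturbance would then be added to the samples to be recognized, as in the inference sketch in paragraph [0052] above.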

Description

Technical field

[0001] The invention relates to an adversarial sample defense method based on data disturbance, oriented to autonomous driving scenarios, and a corresponding data disturbance optimization device, belonging to the technical field of automatic driving.

Background technique

[0002] Adversarial samples are samples that cause machine learning models to make wrong judgments through added interference. In the past few years, deep neural networks have achieved remarkable results in a wide range of application areas such as computer vision, natural language processing, and speech recognition. However, related research shows that deep neural networks are easily affected by artificially designed adversarial examples. These carefully constructed perturbations are imperceptible to humans, but can easily lead deep neural networks to make wrong judgments. This poses a security challenge to the application of deep neural networks in scenarios with high reliability requirements...
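To make the background concrete, the single-step fast gradient sign method (FGSM, a standard published attack rather than anything from this patent) shows how such imperceptible adversarial perturbations are typically constructed:

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=8 / 255):
    """Craft adversarial examples from inputs x with true labels y."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, bounded by eps per pixel.
    x_adv = x_adv + eps * x_adv.grad.sign()
    return torch.clamp(x_adv, 0.0, 1.0).detach()
```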


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06N3/04, G06N3/08
CPC: G06N3/08, G06N3/045, Y02T10/40
Inventors: 王嘉凯, 尹子鑫, 汤力伟, 刘艾杉, 刘祥龙
Owner: BEIHANG UNIV