Adversarial sample detection method and device and computer readable storage medium

An adversarial sample detection technology applied in the field of data processing, which solves the problem that existing methods cannot accurately detect adversarial samples and achieves the effect of improved detection accuracy.

Active Publication Date: 2020-08-14
PENG CHENG LAB

AI Technical Summary

Problems solved by technology

[0005] The main purpose of the present invention is to provide an adversarial sample detection method, device, and computer-readable storage medium, aiming to solve the technical problem that existing adversarial sample detection methods cannot accurately detect adversarial samples.

Examples

Example 1

[0106] Based on the first embodiment, a second embodiment of the adversarial sample detection method of the present invention is proposed. In this embodiment, step S100 includes:

[0107] Step S110, input the file sample to be detected into the malicious code detection model, so as to obtain a plurality of feature maps corresponding to the file sample to be detected through the target channel of the target convolution layer in the malicious code detection model;

[0108] Step S120, based on each feature map, determine the contribution vector corresponding to the file sample to be detected;

[0109] Step S130, determine the contribution distribution vector based on the contribution vector and the file structure of the file sample to be detected.

[0110] In this embodiment, the file sample to be detected is input into the malicious code detection model, and multiple feature maps corresponding to the file sample to be detected are obtained through the tar...
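To make steps S110 through S130 concrete, the following is a minimal sketch, in PyTorch, of how feature maps can be captured from a chosen convolution layer with a forward hook. The toy model, the layer index, and the byte-to-image preprocessing are illustrative assumptions and are not taken from this description.

```python
# Hedged sketch of step S110: capture the feature maps that a chosen
# ("target") convolution layer produces for one file sample. The model
# architecture and preprocessing are illustrative assumptions only.
import torch
import torch.nn as nn

class ToyMalwareCNN(nn.Module):
    """Small stand-in for the malicious code detection model."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),  # assumed target conv layer
            nn.AdaptiveAvgPool2d((8, 8)),
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def get_feature_maps(model, sample, target_layer):
    """Run one forward pass and return the target layer's feature maps."""
    captured = {}
    handle = target_layer.register_forward_hook(
        lambda _mod, _inp, out: captured.update(maps=out.detach()))
    logits = model(sample)
    handle.remove()
    return captured["maps"], logits

model = ToyMalwareCNN().eval()
# Pretend the bytes of the file sample were reshaped into a 64x64 grayscale image.
sample = torch.rand(1, 1, 64, 64)
feature_maps, logits = get_feature_maps(model, sample, model.features[2])
print(feature_maps.shape)  # torch.Size([1, 32, 64, 64])
```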

Example 2

[0114] Based on the second embodiment, a third embodiment of the adversarial sample detection method of the present invention is proposed. In this embodiment, step S120 includes:

[0115] Step S121, based on the output result of the classifier in the malicious code detection model, determine the weight corresponding to each of the feature maps;

[0116] Step S122, perform a weighted average of the feature maps based on the weights, and perform a noise filtering operation on the weighted average result to obtain the contribution vector.

[0117] In this embodiment, after the plurality of feature maps corresponding to the file sample to be detected is obtained, the output result of the classifier in the malicious code detection model is obtained, the weight corresponding to each feature map is calculated from that output result, and the feature maps are then weighted and averaged based on these weights to obtain the weighted average result. Among ...
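One plausible reading of steps S121 and S122 resembles Grad-CAM: each feature map's weight is derived from the gradient of the classifier's output score with respect to that map, the weighted maps are combined, and small or negative values are filtered out as noise. The sketch below follows that reading; the exact weighting and filtering formulas are assumptions and are not spelled out in this description.

```python
# Hedged, Grad-CAM-style sketch of steps S121-S122 (assumed reading, not the
# patent's exact formulas): derive per-feature-map weights from the classifier
# output, take a weighted combination of the maps, and filter out low values.
import torch
import torch.nn.functional as F

def contribution_vector(model, sample, target_layer, target_class=None):
    acts, grads = {}, {}
    h_fwd = target_layer.register_forward_hook(
        lambda _m, _i, out: acts.update(maps=out))
    h_bwd = target_layer.register_full_backward_hook(
        lambda _m, _gi, g_out: grads.update(maps=g_out[0]))
    logits = model(sample)
    cls = int(logits.argmax(dim=1)) if target_class is None else target_class
    model.zero_grad()
    logits[0, cls].backward()        # the classifier output drives the weights
    h_fwd.remove(); h_bwd.remove()

    weights = grads["maps"].mean(dim=(2, 3), keepdim=True)  # one weight per map
    cam = (weights * acts["maps"].detach()).sum(dim=1)      # weighted average (up to a constant)
    cam = F.relu(cam)                                        # "noise filtering" (assumed)
    cam = torch.where(cam < 0.05 * cam.max(), torch.zeros_like(cam), cam)
    return cam.flatten()             # contribution vector for the sample
```

With the toy model from the previous sketch, `contribution_vector(model, sample, model.features[2])` would return one contribution value per spatial position of the target layer's output.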

Example 4

[0135] Based on the fourth embodiment, the fifth embodiment of the adversarial sample detection method of the present invention is proposed. In this embodiment, step S132 includes:

[0136] Step S501, based on the position of the file header and the position of each section in the file structure, divide the file sample to be detected into blocks to obtain multiple file blocks;

[0137] Step S502, dividing each file block into blocks based on a preset rule to obtain a preset number of sub-file blocks corresponding to each file block;

[0138] Step S503, based on the contribution degree corresponding to each byte and each sub-file block, determine the contribution degree distribution vector.

[0139] In this embodiment, after the contribution degree corresponding to each byte in the file sample to be detected is obtained, the position of the file header and the positions of the sections in the file structure are obtained, and based on the position of the file header and the positi...
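Under stated assumptions, the following sketch shows how steps S501 to S503 could turn per-byte contributions into the contribution degree distribution vector: the file is split at the header and section offsets, each resulting block is split into a preset number of sub-blocks, and the mean contribution of each sub-block becomes one entry of the vector. The boundary handling and the use of the mean are illustrative choices, not the description's exact rule.

```python
# Hedged sketch of steps S501-S503: block the file at header/section offsets,
# split each block into a preset number of sub-blocks, and aggregate per-byte
# contributions per sub-block. The mean is an assumed aggregation rule.
import numpy as np

def contribution_distribution_vector(byte_contrib, section_offsets,
                                     subblocks_per_block=8):
    """byte_contrib: one contribution value per byte of the file sample.
    section_offsets: sorted byte offsets where the header/sections start."""
    n = len(byte_contrib)
    boundaries = sorted(set(section_offsets) | {0, n})
    blocks = [(boundaries[i], boundaries[i + 1]) for i in range(len(boundaries) - 1)]

    vector = []
    for start, end in blocks:
        edges = np.linspace(start, end, subblocks_per_block + 1).astype(int)
        for a, b in zip(edges[:-1], edges[1:]):
            vector.append(float(byte_contrib[a:b].mean()) if b > a else 0.0)
    return np.asarray(vector)

# Example: a 4 KiB sample whose header ends at 0x200 and whose second section
# starts at 0x800 (offsets are made up for illustration).
contrib = np.random.rand(4096)
cdv = contribution_distribution_vector(contrib, [0x200, 0x800])
print(cdv.shape)  # (24,) -> 3 blocks x 8 sub-blocks each
```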

Abstract

The invention discloses an adversarial sample detection method, which comprises the following steps: inputting a to-be-detected file sample into a malicious code detection model to obtain a contribution degree distribution vector corresponding to the to-be-detected file sample; carrying out outlier analysis based on the contribution degree distribution vector and the contribution degree distribution vector sets corresponding to various benign samples, to determine whether the contribution degree distribution vector is an outlier of a target contribution degree distribution vector set; and, if the contribution degree distribution vector is an outlier of the target contribution degree distribution vector set, determining that the to-be-detected file sample is an adversarial sample. The invention further discloses an adversarial sample detection device and a computer-readable storage medium. With this method, adversarial samples are accurately detected through the contribution degree distribution vectors; because the benign samples comprise various benign training samples, various kinds of adversarial samples can be detected, further improving the adversarial sample detection accuracy.
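A minimal sketch of the decision described above, assuming the contribution degree distribution vectors have already been computed: the sample's vector is tested against the set of vectors of benign training samples, and the sample is flagged as adversarial if its vector is an outlier of that set. The abstract does not fix a particular outlier test; a LocalOutlierFactor detector in novelty mode is used here purely as an example.

```python
# Hedged sketch of the outlier-analysis step: LocalOutlierFactor is an assumed
# choice of outlier detector, not the method prescribed by the invention.
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

def is_adversarial(sample_vector, benign_vectors, n_neighbors=20):
    detector = LocalOutlierFactor(n_neighbors=n_neighbors, novelty=True)
    detector.fit(benign_vectors)                        # benign contribution vectors
    return detector.predict([sample_vector])[0] == -1   # -1 means outlier

benign = np.random.normal(0.0, 1.0, size=(500, 24))    # stand-in benign vector set
suspect = np.random.normal(5.0, 1.0, size=24)           # vector far from the set
print(is_adversarial(suspect, benign))                  # True -> flagged adversarial
```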

Description

Technical field

[0001] The present invention relates to the technical field of data processing, and in particular to an adversarial sample detection method, device, and computer-readable storage medium.

Background technique

[0002] At present, malicious code detection methods based on deep learning models are widely used. However, deep learning models have certain limitations. For example, for deep learning detection methods that take PE files as input, an attacker can change a few bytes of a file without affecting its original function and thereby mislead the detector into recognizing the malicious code as a benign file. Existing deep learning detection methods cannot accurately detect such modified malicious code, so the deep learning model suffers adversarial sample attacks.

[0003] Existing ways to defend against adversarial samples, such as model distillation and adversarial training, enhance the ability to identify adversarial samples by improving the robustness...

Application Information

Patent Type & Authority: Application (China)
IPC (8): G06F21/56; G06K9/62
CPC: G06F21/563; G06F18/24
Inventors: 张伟哲, 乔延臣, 方滨兴, 张宾, 田志成
Owner: PENG CHENG LAB