Model-universal deep neural network representation visualization method and device

A convolutional neural network representation visualization technology, applied in the field of intelligence, which can solve problems such as weak interpretability, inability to trust model answers, and limits on the practical application of deep models.

Pending Publication Date: 2020-07-10
厦门渊亭信息科技有限公司

AI Technical Summary

Problems solved by technology

[0002] Existing convolutional neural networks can be trained "end-to-end" by constructing large labeled datasets and optimizing parameters through error backpropagation, and they achieve gratifying performance in some scenarios. However, these convolutional neural network models share a problem that cannot be ignored: poor interpretability. That is, although the model gives a high accuracy rate, it cannot give reliable information to explain the basis for its results. This limits the practical application of deep models in industries that demand high interpretability, such as finance, medical care, and autonomous driving.

Method used

figure 1 is a flow chart of the convolutional neural network representation visualization method provided by the present disclosure; figure 2 is a structural diagram of the convolutional neural network representation visualization device.


Examples


Embodiment 1

[0049] Referring to figure 1, a convolutional neural network representation visualization method comprises:

[0050] Step S1: After the image to be visualized is input into the convolutional neural network, a first feature map is obtained, wherein the first feature map is the feature data generated by the layer to be visualized of the convolutional neural network;

[0051] Step S2: Unpooling the first feature map to obtain a second feature map;

[0052] Step S3: Correcting the second feature map with a ReLU function to obtain a third feature map;

[0053] Step S4: Performing deconvolution or guided backpropagation on the third feature map to obtain the first visualized feature map;

[0054] Step S5: Displaying the first visualized feature map.

[0055] In the embodiment of the present disclosure, the feature activation of the layer to be visualized in the convolutional neural network model is displayed through the unpooling, rectification, and deconvolution / guided backpropagation operations, thereby improving the interpretability of the convolutional neural network, as sketched below.
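The patent does not tie steps S1-S5 to a particular framework; the following is a minimal PyTorch sketch for a single convolution + max-pooling stage. The shapes and the one-stage network are illustrative assumptions, and a deconvnet-style transposed convolution stands in for the deconvolution / guided backpropagation alternatives of step S4:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Forward pass through one conv + max-pool stage, recording pooling indices
# so the unpooling step (S2) can place activations back at their origins.
image = torch.randn(1, 3, 32, 32)             # image to be visualized
weight = torch.randn(8, 3, 3, 3)              # conv kernels of the stage
conv_out = F.conv2d(image, weight, padding=1)
pooled, indices = F.max_pool2d(conv_out, 2, return_indices=True)

# S1: the first feature map is the activation of the layer to be visualized.
first_fmap = pooled

# S2: unpooling -- route each retained maximum back to its recorded position.
second_fmap = F.max_unpool2d(first_fmap, indices, kernel_size=2,
                             output_size=conv_out.shape[-2:])

# S3: rectification with ReLU keeps only the positive (activating) evidence.
third_fmap = F.relu(second_fmap)

# S4: deconvolution -- apply the transposed filters to map the activations
# back toward pixel space (the deconvnet variant of this step).
first_visualized = F.conv_transpose2d(third_fmap, weight, padding=1)

# S5: the result has image shape and can be rendered directly.
print(first_visualized.shape)                 # torch.Size([1, 3, 32, 32])
```

Recording the pooling indices during the forward pass is what lets step S2 route each retained maximum back to its original location; this is the standard deconvnet construction that the claimed steps follow.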

Embodiment 2

[0078] Referring to figure 2, the convolutional neural network representation visualization device includes:

[0079] The first acquisition module 1 is configured to acquire a first feature map after the image to be visualized is input into the convolutional neural network, wherein the first feature map is the feature data generated by the layer to be visualized of the convolutional neural network;

[0080] The unpooling module 2 is configured to unpool the first feature map to obtain a second feature map;

[0081] The correction module 3 is configured to correct the second feature map with a ReLU function to obtain a third feature map;

[0082] The deconvolution module 4 is configured to perform deconvolution or guided backpropagation on the third feature map to obtain the first visualized feature map;

[0083] The first display module 5 is configured to display the first visualized feature map.

[0084] In one embodiment, the device also includes:

[0085] The second acquisition module acquires a fourth feature map, and the fourth feature map is...
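As a rough illustration of how the enumerated modules 1-5 could be composed, here is a sketch under the same assumed PyTorch types and single conv/pool stage as above; the class and method names are hypothetical, and printing the shape stands in for actual display:

```python
import torch
import torch.nn.functional as F

class VisualizationDevice:
    """Sketch of modules 1-5 for one conv + max-pool stage."""

    def __init__(self, conv_weight):
        self.weight = conv_weight                   # filters of the visualized stage

    def acquire(self, image):                       # first acquisition module 1
        conv_out = F.conv2d(image, self.weight, padding=1)
        pooled, self.indices = F.max_pool2d(conv_out, 2, return_indices=True)
        self.out_size = conv_out.shape[-2:]
        return pooled                               # first feature map

    def unpool(self, fmap):                         # unpooling module 2
        return F.max_unpool2d(fmap, self.indices, 2, output_size=self.out_size)

    def rectify(self, fmap):                        # correction module 3
        return F.relu(fmap)

    def deconvolve(self, fmap):                     # deconvolution module 4
        return F.conv_transpose2d(fmap, self.weight, padding=1)

    def display(self, fmap):                        # first display module 5
        print("visualized feature map:", tuple(fmap.shape))

device = VisualizationDevice(torch.randn(8, 3, 3, 3))
fmap = device.acquire(torch.randn(1, 3, 32, 32))
device.display(device.deconvolve(device.rectify(device.unpool(fmap))))
```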

Embodiment 3

[0096] A convolutional neural network training method, comprising:

[0097] Executing the steps of any one of the convolutional neural network representation visualization methods described in Embodiment 1;

[0098] Receiving an input verification result for the judgment of the convolutional neural network;

[0099] If the verification result is correct, using the image to be visualized as a training sample to train the convolutional neural network.

[0100] After the first visualized feature map or the third visualized feature map is output, the user can quickly judge from it whether the judgment basis and judgment result of the convolutional neural network are accurate. When both the judgment basis and the judgment result are accurate, the input verification result is correct; when the verification result is correct, the image to be visualized is used as a training sample, as sketched below.
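A minimal sketch of this human-in-the-loop step follows; `visualize` (standing in for steps S1-S5 of Embodiment 1), `model`, `train_step`, and the y/n prompt are hypothetical stand-ins that the patent does not specify:

```python
def verified_training_step(model, image, label, visualize, train_step):
    # Show the user the visualized feature map(s) for this input.
    visualize(model, image)
    # The user verifies both the judgment basis and the judgment result.
    answer = input("Judgment basis and result both accurate? [y/n] ")
    if answer.strip().lower() == "y":     # verification result is "correct"
        train_step(model, image, label)   # keep the image as a training sample
```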



Abstract

In order to solve the problem of poor interpretability of convolutional neural networks in the prior art, the invention provides a convolutional neural network representation visualization method, a convolutional neural network representation visualization device, and a training method, so as to improve the interpretability of the convolutional neural network. The convolutional neural network representation visualization method comprises the steps of: obtaining a first feature map after a to-be-visualized image is input into a convolutional neural network; unpooling the first feature map to obtain a second feature map; correcting the second feature map with a ReLU function to obtain a third feature map; performing deconvolution or guided backpropagation on the third feature map to obtain a first visualized feature map; and displaying the first visualized feature map. The invention further discloses a corresponding visualization device and a convolutional neural network training method based on the representation visualization method. Through the unpooling, rectification, and deconvolution / guided backpropagation operations, the feature activation of the to-be-visualized layer in the convolutional neural network model is displayed, so the interpretability of the convolutional neural network is improved.

Description

technical field

[0001] The present disclosure relates to the field of intelligence, and in particular to a convolutional neural network representation visualization method, device and training method.

Background technique

[0002] Although existing convolutional neural networks can be trained "end-to-end" by constructing large labeled datasets and optimizing parameters through error backpropagation, and can achieve gratifying performance in some scenarios, these convolutional neural network models share a problem that cannot be ignored: poor interpretability. That is, although the model gives a high accuracy rate, it cannot give reliable information to explain the basis for its results. As a result, in industries that require high interpretability, such as finance, medical care, and autonomous driving, although the result data of the convolutional neural network model looks good, it cannot be...


Application Information

IPC(8): G06N3/04; G06N3/08
CPC: G06N3/084; G06N3/045
Inventor: 洪万福, 王彬, 钱智毅
Owner: 厦门渊亭信息科技有限公司