Method for multi-channel secret information transmission through capsule network
A secret-information, multi-channel transmission technology, applied in the field of multi-channel secret information transmission through capsule networks, achieving a strongly practical effect
Examples
Embodiment 1
[0035] Referring to Figure 1 to Figure 3, a method for multi-channel secret information transmission via a capsule network is presented. It is characterized in that, during training of the capsule network, multiple pieces of different secret information are embedded into the capsule network; each receiver can use his or her own key to extract the corresponding portion of the secret information from the capsule network, while for the remaining portions of the secret information the receiver cannot even determine that they exist, let alone extract them. In addition, the parameters of the information-extraction network are generated directly from the key without training, so there is no need to transmit the information-extraction network to the receiver; holding the correct key is sufficient to extract the secret information.
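The key idea that the extraction network's parameters are "generated directly by the key without training" can be illustrated with a small sketch. The function below is a hypothetical helper (not from the patent): a SHA-256 digest of the key seeds a PRNG that fills the weight matrix, so the same key always reproduces the same parameters and the network itself never needs to be transmitted.

```python
import hashlib
import numpy as np

def key_to_params(key: str, shape):
    """Deterministically derive extraction-network weights from a secret key.

    Illustrative assumption: hash the key, use the digest to seed a PRNG,
    and draw the weight matrix from that PRNG. The same key always yields
    the same weights; a different key yields unrelated weights.
    """
    seed = int.from_bytes(hashlib.sha256(key.encode()).digest()[:8], "big")
    rng = np.random.default_rng(seed)
    return rng.standard_normal(shape)

w_a = key_to_params("receiver-1-key", (16, 8))
w_b = key_to_params("receiver-1-key", (16, 8))  # same key, same weights
w_c = key_to_params("receiver-2-key", (16, 8))  # different key, different weights
```

Because the derivation is deterministic, sender and receiver only need to agree on the key; the receiver regenerates the extraction network locally.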
[0036] As shown in Figure 2, the capsule network architecture used is composed of two convolutional layers (the Conv1 layer and the PrimaryCaps layer) and a fully c...
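For readers unfamiliar with the Conv1/PrimaryCaps design mentioned above: capsule networks replace scalar activations with vector "capsules" whose length encodes probability, normalized by the standard squash nonlinearity from the original CapsNet formulation. A minimal sketch:

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    """Standard capsule-network squash nonlinearity.

    squash(s) = (||s||^2 / (1 + ||s||^2)) * (s / ||s||)
    Shrinks short vectors toward zero and long vectors toward unit
    length, so a capsule's length can be read as a probability.
    """
    sq_norm = np.sum(s * s, axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

v = squash(np.array([3.0, 4.0]))  # input norm 5 -> output norm 25/26
```

The PrimaryCaps layer groups convolutional feature maps into such vectors and squashes them before routing to the higher-level capsules.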
Embodiment 2
[0054] As shown in Figure 1, a method for multi-channel secret information transmission through a capsule network. This embodiment uses the MNIST image dataset as an example to transmit secret information to 10 receivers simultaneously.
[0055] (1) Construct the capsule network architecture for information hiding, described as Architecture A.
[0056] (2) Construct 10 fully connected layers as information-extraction networks for the 10 receivers, and connect them to the prediction vectors of Architecture A.
[0057] (3) Use the keys {K1, K2, ..., K10} to respectively generate the parameters of the 10 fully connected information-extraction layers. Once generated, these parameters remain unchanged throughout network training.
[0058] (4) With {M1, M2, ..., M10} as the guide, and with the goal of minimizing the loss shown in Equation (6), Architecture B is trained using the MNIST image dataset. The Architecture A obtained through training i...
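Since Equation (6) is not reproduced in this excerpt, the steps above can be sketched under assumptions: fix each receiver's extraction weights from its key, then train the shared representation so that every receiver's fixed extractor recovers its own message. The toy below (hypothetical names, a plain binary-cross-entropy loss in place of Equation (6), and a single trainable vector standing in for the capsule prediction vectors) shows that one shared carrier can serve several key-specific extractors at once.

```python
import hashlib
import numpy as np

def key_to_matrix(key: str, shape):
    # Key-derived, fixed extraction weights (never updated during training).
    seed = int.from_bytes(hashlib.sha256(key.encode()).digest()[:8], "big")
    return np.random.default_rng(seed).standard_normal(shape)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_receivers, msg_len, hidden = 3, 8, 32
rng = np.random.default_rng(0)
# One secret bit-string per receiver (stand-ins for M1..M10).
messages = rng.integers(0, 2, size=(n_receivers, msg_len)).astype(float)
Ws = [key_to_matrix(f"K{i}", (msg_len, hidden)) for i in range(n_receivers)]

# Trainable shared carrier (stand-in for the capsule prediction vectors).
h = rng.standard_normal(hidden) * 0.1
lr = 0.05
for _ in range(4000):
    grad = np.zeros_like(h)
    for W, M in zip(Ws, messages):
        p = sigmoid(W @ h)
        grad += W.T @ (p - M)  # gradient of binary cross-entropy w.r.t. h
    h -= lr * grad / n_receivers

# Each receiver applies only its own fixed, key-derived extractor.
recovered = [(sigmoid(W @ h) > 0.5).astype(float) for W in Ws]
```

The extraction weights stay frozen exactly as step (3) requires; only the carrier is optimized, mirroring how the capsule network itself is trained in step (4) while the key-generated extraction layers are not.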