Multi-task learning method and device
A multi-task learning technology, applied in neural learning methods, neural architectures, biological neural network models, etc., addressing the problem that no implementation solution has been given for multi-task learning in wireless communication.
Examples
Embodiment 1
[0126] Embodiment 1: see the neural network architecture shown in Figure 8; lossless compression with joint training of the three encoders.
[0127] The input of the neural network: source X and source Y.
[0128] Training constraint: R0 + R1 + R2 = H(X, Y), where R0 is the coding rate for source X and source Y, R1 is the coding rate of source X, R2 is the coding rate of source Y, and H(X, Y) is the joint entropy of source X and source Y. This constraint can be regarded as a newly designed objective function.
[0129] The goal of training: Losslessly reconstruct source X and source Y.
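The constraint above can be folded into a trainable objective. As a minimal sketch (not from the patent text), one common approach is to add a penalty term to the reconstruction loss that vanishes exactly when R0 + R1 + R2 = H(X, Y); the function names, the quadratic penalty form, and the weight `lam` are all illustrative assumptions.

```python
def rate_penalty(r0: float, r1: float, r2: float, joint_entropy: float) -> float:
    """Quadratic penalty; zero exactly when R0 + R1 + R2 = H(X, Y) holds."""
    return (r0 + r1 + r2 - joint_entropy) ** 2


def joint_loss(recon_loss: float, r0: float, r1: float, r2: float,
               joint_entropy: float, lam: float = 1.0) -> float:
    """Reconstruction loss plus the weighted rate-constraint penalty.

    `lam` trades off reconstruction quality against how strictly the
    rate constraint is enforced during joint training of the encoders.
    """
    return recon_loss + lam * rate_penalty(r0, r1, r2, joint_entropy)
```

A soft penalty like this is only one way to realize the constraint; a Lagrangian formulation with a learned multiplier would serve the same purpose.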
[0130] The training process includes the following steps:
[0131] Step 1.1: Divide the data of source X and source Y into two parts: one part is used as training data, and the other part is used as test data.
[0132] For example, the information source X may be the information source of task 1, and the information source Y may be the information source of task 2.
[0133] The train...
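Step 1.1 above can be sketched as a paired train/test split. This is an illustrative assumption about how the split might be done, not the patent's own procedure; the function name, the 80/20 ratio, and the fixed seed are all hypothetical.

```python
import random


def split_sources(x_samples, y_samples, train_fraction=0.8, seed=0):
    """Split paired (X, Y) samples into training and test sets (Step 1.1).

    X and Y samples are kept paired so that the joint statistics of the
    two sources are preserved in both partitions.
    """
    assert len(x_samples) == len(y_samples)
    idx = list(range(len(x_samples)))
    random.Random(seed).shuffle(idx)  # deterministic shuffle for the sketch
    cut = int(len(idx) * train_fraction)
    train = [(x_samples[i], y_samples[i]) for i in idx[:cut]]
    test = [(x_samples[i], y_samples[i]) for i in idx[cut:]]
    return train, test
```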
Embodiment 2
[0141] Embodiment 2: lossless compression with the three encoders trained separately.
[0142] The input of the neural network: source X and source Y.
[0143] Training constraints: depending on requirements, different constraints can be set when the encoders are trained separately:
Embodiment 2.1
[0144] Example 2.1: R0 = H(X, Y), R1 = R2 = 0; in this case, encoder 0 is trained, and encoder 1 and encoder 2 (that is, the private encoders) are not trained.
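The selective training in Example 2.1 amounts to updating only the common encoder while freezing the private ones. The sketch below shows that configuration with a trainable flag per encoder; the class, the function name, and the flag mechanism are illustrative assumptions, not the patent's implementation (in a deep-learning framework the same effect would come from disabling gradients on the frozen encoders' parameters).

```python
class Encoder:
    """Minimal stand-in for an encoder network (illustrative only)."""

    def __init__(self, name: str):
        self.name = name
        self.trainable = True  # whether gradient updates are applied


def configure_example_2_1(common, private_x, private_y):
    """Example 2.1: train encoder 0 only; freeze the private encoders.

    With R1 = R2 = 0, encoders 1 and 2 carry no rate, so only the
    common encoder 0 receives updates. Returns the encoders that
    remain trainable under this configuration.
    """
    common.trainable = True
    private_x.trainable = False
    private_y.trainable = False
    return [e for e in (common, private_x, private_y) if e.trainable]
```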