
Layer increase and decrease deep learning neural network training method, system, medium and equipment

A neural network training and deep learning technology, applied to deep learning neural network training with layer increase and decrease, which can solve problems such as the inability to achieve a sufficient fit, underfitting, and damage to the cognitive weights and generative weights of the hidden layers.

Active Publication Date: 2021-06-22
SUPERPOWER INNOVATION INTELLIGENT TECH DONGGUAN CO LTD

AI Technical Summary

Problems solved by technology

[0003] However, the top-down supervised training of existing deep learning techniques either adjusts only the network weights between the output layer and the hidden layer, or adjusts the network weights of all layers. When there are more categories of top-level concepts than labels, adjusting only the weights of the classifier network between the output layer and the hidden layer fails whenever the classifier's network structure is relatively simple: repeated adjustment of the classifier's parameters tends to fit one output label at the expense of another, so a sufficient fit cannot be achieved. If the classifier's network structure is instead made very complex, for example a deep BP neural network, overfitting occurs: key features are discarded during fitting, so classification is completely correct on the training samples but proves wrong in application.

[0004] It follows that supervised training restricted to the level between the output layer and the hidden layer either fails to fit sufficiently or overfits, and deep learning then fails in application. If instead the network weights of all layers are adjusted, the cognitive weights and generative weights in the hidden layers are destroyed: the concepts and scenes obtained after adjustment are no longer derived purely from the features and scenes of the input data, but are distorted to meet the demands of the output labels. Overfitting again appears, so classification is completely correct on the samples but proves wrong in application.



Examples


Embodiment 1

[0052] Deep learning can be described and constructed as follows:

[0053] The computation that produces an output from an input can be represented by a flow graph: a graph in which each node represents both a basic computation and a computed value, and the result of each node's computation feeds the values of that node's child nodes. Given the set of computations allowed at each node and the possible graph structures, a family of functions is defined. Input nodes have no parents, and output nodes have no children.

[0054] A special property of such flow graphs is depth: the length of the longest path from an input to an output.
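Depth as defined above (the length of the longest input-to-output path) can be computed directly on a small flow graph. The adjacency-list representation below is an illustrative assumption, not notation from the patent.

```python
# Compute the depth of a computation flow graph: the length (in edges) of
# the longest path from an input node to an output node of a DAG.
# The {node: [children]} format is an illustrative assumption.
from functools import lru_cache

def graph_depth(children):
    """Longest root-to-leaf path length in a DAG given as {node: [children]}."""
    @lru_cache(maxsize=None)
    def depth_from(node):
        kids = children.get(node, [])
        # Output nodes have no children, so they contribute depth 0.
        return 0 if not kids else 1 + max(depth_from(k) for k in kids)
    return max(depth_from(n) for n in children)

# A three-node chain (input -> hidden -> output) has depth 2.
print(graph_depth({"x": ["h"], "h": ["y"], "y": []}))  # 2
```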

[0055] Considering the learning structure as a network, the core idea of deep learning is as follows:

[0056] Step 1: Adopt bottom-up unsupervised training

[0057] 1) Construct a single layer of neurons layer by layer.

[0058] 2) Each layer is tuned using th...
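The two steps above (construct one layer of neurons at a time, then tune each layer on the output of the layer below) can be sketched as greedy layer-wise pretraining. The toy centering layer below is an assumption standing in for a real unsupervised learner such as an autoencoder or RBM.

```python
# Minimal sketch of greedy bottom-up layer-wise training. Each "layer" here
# is a toy unsupervised transform (it learns the mean of its input and
# subtracts it); real systems would fit an autoencoder or RBM per layer.
class CenteringLayer:
    def fit(self, data):
        self.mean = sum(data) / len(data)  # unsupervised: uses no labels
        return self

    def transform(self, data):
        return [x - self.mean for x in data]

def greedy_layerwise_pretrain(data, n_layers):
    layers = []
    for _ in range(n_layers):
        layer = CenteringLayer().fit(data)  # 1) construct and tune one layer
        data = layer.transform(data)        # 2) pass its output upward;
        layers.append(layer)                #    lower layers stay frozen
    return layers

stack = greedy_layerwise_pretrain([1.0, 2.0, 3.0], n_layers=2)
print([layer.mean for layer in stack])  # [2.0, 0.0]
```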

Embodiment 2

[0095] As shown in Figure 5, this embodiment provides a deep learning neural network training system with layer increase and decrease. The system includes a training module 501, a first input module 502, a first judgment module 503, a hidden layer addition module 504, a second input module 505, a second judgment module 506, a hidden layer deletion module 507 and an output module 508. The specific functions of each module are as follows:

[0096] The training module 501 is used to train the current deep learning neural network on samples; the current deep learning neural network includes an input layer, a hidden layer, a classifier and an output layer.

[0097] The first input module 502 is configured to input training input data into the current deep learning neural network, and obtain first output data through calculation of the current deep learning neural network.

[0098] The first judging module 503 is configured to judge whe...
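The module layout above can be sketched as a single class whose methods correspond to modules 501-508. The names and the toy judgment logic are assumptions inferred from the module descriptions, not the patent's implementation.

```python
# Illustrative sketch of the Embodiment 2 modules as methods of one class.
# Names and behavior are assumptions based on the module descriptions only.
class LayerAdjustingSystem:
    def __init__(self):
        self.hidden_layers = ["h1"]        # hidden layers before the classifier

    def train(self, samples):              # training module 501
        pass                               # placeholder for real training

    def first_judgment(self, first_output, expected_output):   # modules 502-503
        return first_output == expected_output

    def add_hidden_layer(self):            # hidden layer addition module 504
        self.hidden_layers.append(f"h{len(self.hidden_layers) + 1}")

    def second_judgment(self, second_output, real_result):     # modules 505-506
        return second_output == real_result

    def delete_hidden_layer(self):         # hidden layer deletion module 507
        self.hidden_layers.pop()           # drops the layer before the classifier

system = LayerAdjustingSystem()
system.add_hidden_layer()
print(system.hidden_layers)  # ['h1', 'h2']
```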

Embodiment 3

[0107] This embodiment provides a storage medium storing one or more programs. When the programs are executed by a processor, the layer increase and decrease deep learning neural network training method of Embodiment 1 above is implemented, as follows:

[0108] Train the current deep learning neural network on samples; the current deep learning neural network includes an input layer, a hidden layer, a classifier and an output layer;

[0109] Input the training input data into the current deep learning neural network, and obtain first output data through the computation of the current deep learning neural network;

[0110] Judge whether the first output data is the same as the expected output data corresponding to the training input data;

[0111] When the first output data and the expected output data corresponding to the training input data do not meet the first preset condition, a hidden...



Abstract

The invention discloses a training method, system, medium and equipment for a deep learning neural network with layer increase and decrease. The method includes: inputting training input data into the current deep learning neural network and obtaining first output data through the computation of the current deep learning neural network; judging whether the first output data is the same as the expected output data; if the first preset condition is not met, adding a hidden layer before the classifier in the current deep learning neural network; otherwise, inputting test input data into the current deep learning neural network and obtaining second output data through the computation of the deep learning neural network; judging whether the second output data is the same as the real result data; if the second preset condition is not met, deleting the hidden layer immediately before the classifier in the current deep learning neural network; otherwise, outputting the current deep learning neural network. With the present invention, a sufficient fit can be achieved in which the top-level concepts are just enough to fully fit the output data.
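The loop described in the abstract (grow the network on a training mismatch, shrink it on a test mismatch, otherwise output it) can be sketched as follows. The `ToyNet` stand-in and its capacity rule are assumptions for illustration only, not the patent's network.

```python
# Hedged sketch of the layer increase/decrease loop from the abstract.
# ToyNet is an assumed stand-in: it "fits" the training data only with at
# least 2 hidden layers, and "generalizes" only with at most 2.
class ToyNet:
    def __init__(self):
        self.hidden_layers = 1

    def train(self, samples):
        pass  # placeholder for gradient-based training

    def matches(self, split):
        if split == "train":
            return self.hidden_layers >= 2   # enough capacity to fit
        return self.hidden_layers <= 2       # not overfitted on test data

def fit_with_layer_adjustment(net, max_rounds=10):
    for _ in range(max_rounds):
        net.train(None)
        if not net.matches("train"):         # first condition unmet:
            net.hidden_layers += 1           # add a hidden layer (grow)
            continue
        if not net.matches("test"):          # second condition unmet:
            net.hidden_layers -= 1           # delete the last hidden layer
            continue
        return net                           # both checks pass: output net
    return net

net = fit_with_layer_adjustment(ToyNet())
print(net.hidden_layers)  # 2
```

The loop stops at the smallest capacity that both fits the training data and holds up on test data, which is the "just enough to fully fit" behavior the abstract claims.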

Description

technical field [0001] The present invention relates to a deep learning neural network training method, in particular to a layer increase and decrease deep learning neural network training method, system, medium and equipment, belonging to the field of neural network training. Background technique [0002] Existing deep learning technology can obtain output labels from input data (such as obtaining a person's ID card number from an avatar, or from speech) through supervised training on labelled samples (such as avatars labelled with ID numbers, and speech labelled with ID numbers). [0003] However, the top-down supervised training of existing deep learning techniques either adjusts only the network weights between the output layer and the hidden layer, or adjusts the network weights of all layers. When there are more categories of top-level concepts than labels, if only the weights of the classifier network between the output layer and the hidden ...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06N3/08, G06N3/04
CPC: G06N3/08, G06N3/045
Inventor: 朱定局
Owner: SUPERPOWER INNOVATION INTELLIGENT TECH DONGGUAN CO LTD