
An Incrementally Stacked Width Learning System with Deep Architecture

A learning system based on stacking technology, applicable to neural learning methods, neural architectures, complex mathematical operations, and related fields. It addresses the limited generalization ability and mediocre performance of existing width learning system variants, and achieves the effect of a deepened network structure without the long training time and increased complexity that retraining a deep model entails.

Active Publication Date: 2022-04-22
SOUTH CHINA UNIV OF TECH
Cites: 3 · Cited by: 0

AI Technical Summary

Problems solved by technology

[0007] Although the width learning system has many variants that can meet the needs of different tasks, the generalization ability of these variant models is still limited: each variant targets only a specific task, and outside that setting its performance is poor.

Method used



Examples


Embodiment 1

[0048] This embodiment provides a stacked width learning system with a deep structure, as shown in Figure 1. It is composed of multiple width learning system modules, which may be original width learning system units. An original width learning system unit includes feature nodes, a feature-node weight layer, enhancement nodes, and an enhancement-node weight layer. Assuming a width learning system has n groups of feature nodes and m groups of enhancement nodes, the approximate network output can be expressed as:

[0049] Y = [Z^n, H^m] W^m

[0050]   = [Z_1, Z_2, …, Z_n, H_1, H_2, …, H_m] W^m

[0051]   = [Z_1, Z_2, …, Z_n] W_E + [H_1, H_2, …, H_m] W_H

[0052] where Z^n = [Z_1, Z_2, …, Z_n] denotes the n groups of feature nodes, H^m = [H_1, H_2, …, H_m] denotes the m groups of enhancement nodes, and W^m is the output weight matrix, whose blocks W_E and W_H correspond to the feature-node weight layer and the enhancement-node weight layer respectively. Let ...
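The output equation above can be sketched in NumPy. This is a minimal illustration of a plain width (broad) learning unit, not the patent's implementation: the mapping functions, group sizes, and the ridge regularization constant are assumptions chosen for the sketch. Feature nodes Z_i are (here random) linear maps of the input, enhancement nodes H_j are nonlinear maps of all feature nodes, and the output weights W^m are solved in closed form.

```python
import numpy as np

rng = np.random.default_rng(0)

def bls_forward(X, Y, n_feat_groups=10, n_enh_groups=10,
                nodes_per_group=8, reg=1e-3):
    """One plain width learning unit: Y ≈ [Z^n, H^m] W^m (sketch).

    Returns the combined node matrix A = [Z^n, H^m] and the output
    weights W^m obtained by ridge regression.
    """
    # n groups of feature nodes: Z_i = X W_i + b_i (random maps here)
    Z = np.hstack([X @ rng.standard_normal((X.shape[1], nodes_per_group))
                   + rng.standard_normal(nodes_per_group)
                   for _ in range(n_feat_groups)])
    # m groups of enhancement nodes: H_j = tanh(Z W_j + b_j)
    H = np.hstack([np.tanh(Z @ rng.standard_normal((Z.shape[1], nodes_per_group))
                           + rng.standard_normal(nodes_per_group))
                   for _ in range(n_enh_groups)])
    A = np.hstack([Z, H])                      # A = [Z^n, H^m]
    # W^m = (A^T A + reg·I)^{-1} A^T Y  (regularised pseudoinverse)
    W = np.linalg.solve(A.T @ A + reg * np.eye(A.shape[1]), A.T @ Y)
    return A, W
```

The closed-form solve is what gives the width learning system its speed advantage over gradient-trained deep networks: predictions are simply `A @ W`.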

Embodiment 2

[0099] In practical applications, the number of nodes in the stacked network must be tuned to achieve the best model performance. For most deep-structured models, adding nodes means training the network from scratch and updating all of its parameters again, which is time-consuming and labor-intensive. The incrementally stacked width learning system proposed in this patent can perform incremental learning not only in the width direction but also in the depth direction, offering a new approach to incremental learning.

[0100] (1) Incremental learning in the width direction

[0101] In each width learning system module of the incrementally stacked width learning system, feature nodes and enhancement nodes can be added dynamically to increase the width of the network. The weight matrix for the newly added nodes can be computed separately, without affecting the weight matrix of the ...
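A minimal sketch of width-direction incremental learning, under the assumption that the module uses the standard block pseudoinverse update common in broad learning: when new node columns `A_new` are appended to an already-trained node matrix `A`, only the new weights are solved and the stored pseudoinverse is updated in place, so nothing is retrained from scratch. The function name and tolerance are illustrative, not from the patent.

```python
import numpy as np

def add_nodes(A, A_pinv, W, Y, A_new):
    """Append node columns A_new to a trained layer (sketch).

    A      : existing node matrix, A_pinv its pseudoinverse,
    W      : current output weights (W = A_pinv @ Y),
    Returns the augmented node matrix, its pseudoinverse, and the
    updated weights, computed incrementally.
    """
    D = A_pinv @ A_new              # projection of new nodes on old ones
    C = A_new - A @ D               # component of A_new outside span(A)
    if np.linalg.norm(C) > 1e-10:   # new nodes add new directions
        B = np.linalg.pinv(C)
    else:                           # new nodes linearly dependent on old
        B = np.linalg.solve(np.eye(D.shape[1]) + D.T @ D, D.T) @ A_pinv
    A_aug = np.hstack([A, A_new])
    A_aug_pinv = np.vstack([A_pinv - D @ B, B])
    W_aug = np.vstack([W - D @ (B @ Y), B @ Y])   # old block adjusted,
    return A_aug, A_aug_pinv, W_aug               # new block solved fresh
```

The cost of the update is dominated by the small pseudoinverse of `C`, which is far cheaper than recomputing the pseudoinverse of the whole augmented matrix.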

Embodiment 3

[0142] In the stacked width learning system with a deep structure of this embodiment, the width learning modules adopt the various variant structures of the width learning system, including but not limited to the cascaded width learning system (Cascaded BLS), the recurrent and gated width learning systems (Recurrent and Gated BLS), and the convolutional width learning system (Convolutional BLS). Each width learning module can flexibly select a model according to the complexity of its task. A three-layer stacked width learning system built from such variant structures is shown in Figure 4.



Abstract

The invention provides an incrementally stacked width learning system with a deep structure, comprising n width learning system modules stacked through residual connections. The output of the (i-1)-th width learning system module serves as the input of the i-th module, and the expected output of the i-th module is the residual left by modules 1, …, i-1 (i ≤ n); the final output of the system is the sum of the outputs of the n modules. The system retains the efficiency and speed of the width learning system while stacking multiple modules through residual connections, thereby increasing the depth of the network and giving it strong learning ability.
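The residual stacking described in the abstract can be sketched as follows. This is an illustrative toy, not the patented system: each "module" here is a tiny random-feature ridge regressor standing in for a full width learning unit, and all hyperparameters are assumptions. What it shows is the wiring: module i takes the previous module's output as input, is trained on the residual left by all earlier modules, and the system output is the sum of module outputs.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_stacked(X, Y, n_modules=3, width=32, reg=1e-2):
    """Residual stacking sketch: each module fits what is left of Y."""
    modules, inp, residual, total = [], X, Y.copy(), 0.0
    for _ in range(n_modules):
        Wf = rng.standard_normal((inp.shape[1], width))  # random features
        A = np.tanh(inp @ Wf)
        # closed-form ridge solve against the current residual
        W = np.linalg.solve(A.T @ A + reg * np.eye(width), A.T @ residual)
        out = A @ W
        modules.append((Wf, W))
        total = total + out            # system output = sum of modules
        residual = residual - out      # next module targets what is left
        inp = out                      # and takes this output as input
    return modules, total
```

Because every module is solved in closed form against a residual target, depth is added without gradient-based retraining of earlier modules, which is the efficiency the abstract claims.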

Description

Technical field

[0001] The invention relates to the technical field of width learning, and more specifically, to an incrementally stacked width learning system with a deep structure.

Background technique

[0002] With the development of artificial intelligence technology and the enormous demand for processing large-scale data, many machine learning algorithms have been proposed. Traditional machine learning algorithms, however, rely on feature expression: good features play a key role in an algorithm's performance, so applying these algorithms requires complex feature-extraction work, which imposes real limitations. Deep learning solves this problem: by stacking network depth, a deep network can automatically learn high-dimensional abstract features from the data, and deep learning has therefore made key breakthroughs in many fields.

[0003] Although the deep network structure can make the network have...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06N3/08, G06N3/04, G06F17/16
CPC: G06N3/08, G06F17/16, G06N3/045
Inventors: 陈俊龙刘竹琳贾雪叶汉云冯绮颖张通
Owner: SOUTH CHINA UNIV OF TECH