
Depth feature representation method based on multiple stacked auto-encoding

A technology based on stacked auto-encoding and deep features, applied in the field of deep feature representation based on multiple stacked auto-encoding, which addresses the problem that a single deep architecture can only extract a single-layer structure.

Status: Inactive | Publication Date: 2017-09-22
WUHAN UNIV
Cites: 0 | Cited by: 11

AI Technical Summary

Problems solved by technology

[0003] Aiming at the inability to intuitively perceive the complex results of feature extraction, and at the inadequacy of deep architectures that can extract only a single-layer structure, the present invention proposes a feasible method for accurately extracting the features of a target image: it attempts to imitate the human brain's visual cortex, combining the multi-level features of the image representation to characterize the target image.




Embodiment Construction

[0038] First, we explain the basic principles of stacked autoencoders. An autoencoder takes an input x ∈ R^d and maps it to a latent representation h ∈ R^{d′} through a deterministic function h = f_θ(x) = σ(Wx + b), with parameters θ = {W, b}. The input is then reconstructed from the latent representation through the reverse mapping y = f_{θ′}(h) = σ(W′h + b′), with θ′ = {W′, b′}. The two parameter sets are usually constrained to the tied form W′ = Wᵀ, so that encoding the input and decoding the latent representation use the same weights. The parameters are optimized over the training set D_n = {(x_0, t_0), ..., (x_n, t_n)} by minimizing an appropriate cost function.
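The tied-weight autoencoder above is compact enough to implement directly. Below is a minimal sketch in Python/NumPy, assuming a sigmoid activation for σ and squared reconstruction error as the cost function, since the text fixes neither; the class name TiedAutoencoder, the learning rate, and the layer sizes are illustrative, not from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class TiedAutoencoder:
    """Single-layer autoencoder with tied weights (W' = W^T), per [0038].

    Assumed choices: sigmoid activation, squared reconstruction error.
    """

    def __init__(self, d, d_hidden, lr=0.1):
        self.W = rng.normal(0.0, 0.1, size=(d_hidden, d))  # shared encoder/decoder weights
        self.b = np.zeros(d_hidden)   # encoder bias b
        self.b_out = np.zeros(d)      # decoder bias b'
        self.lr = lr

    def encode(self, x):
        # h = f_theta(x) = sigma(W x + b)
        return sigmoid(self.W @ x + self.b)

    def decode(self, h):
        # y = f_theta'(h) = sigma(W^T h + b')
        return sigmoid(self.W.T @ h + self.b_out)

    def train_step(self, x):
        h = self.encode(x)
        y = self.decode(h)
        # Backprop for L = 0.5 * ||y - x||^2; with W' = W^T the gradient
        # of W is the sum of a decoder term and an encoder term.
        d_y = (y - x) * y * (1.0 - y)            # error signal at the output
        d_h = (self.W @ d_y) * h * (1.0 - h)     # error signal at the hidden layer
        grad_W = np.outer(d_h, x) + np.outer(h, d_y)
        self.W -= self.lr * grad_W
        self.b -= self.lr * d_h
        self.b_out -= self.lr * d_y
        return 0.5 * np.sum((y - x) ** 2)

# Quick check on random data: reconstruction error should fall over epochs.
ae = TiedAutoencoder(d=20, d_hidden=8)
data = rng.random((200, 20))
for epoch in range(50):
    loss = sum(ae.train_step(x) for x in data)
print("total reconstruction error after training:", loss)
```

Note that because the weights are tied, grad_W accumulates both the encoder contribution (np.outer(d_h, x)) and the transposed decoder contribution (np.outer(h, d_y)), matching the constraint W′ = Wᵀ.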

[0039] First, build multiple multi-level autoencoders. This process is completely unsupervised; it imitates the cognitive ability of the human brain and realizes a coarse-to-fine progression by combining features at different levels. The framework combines multiple autoencoders, each with a different structure (a sketch of this construction follows below). A network with few...
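As referenced above, here is a hedged sketch of that multi-network construction: each stacked autoencoder is trained greedily layer by layer, entirely unsupervised, and networks of different depths yield coarse and fine representations that are then combined. It reuses the hypothetical TiedAutoencoder class from the previous sketch; the layer sizes and epoch counts are assumptions for illustration.

```python
import numpy as np

# Requires the TiedAutoencoder class from the previous sketch.

def train_stack(data, layer_sizes, epochs=30):
    """Greedy layer-wise unsupervised training of one stacked autoencoder.

    Returns the trained layers and the top-level codes for each sample.
    """
    stack, codes = [], data
    for d_hidden in layer_sizes:
        ae = TiedAutoencoder(d=codes.shape[1], d_hidden=d_hidden)
        for _ in range(epochs):
            for x in codes:
                ae.train_step(x)
        stack.append(ae)
        codes = np.array([ae.encode(x) for x in codes])  # feed codes to the next layer
    return stack, codes

rng2 = np.random.default_rng(1)
data = rng2.random((100, 20))

# Networks with different depths capture different levels of structure.
_, shallow_feat = train_stack(data, layer_sizes=[12])      # shallow network
_, deep_feat = train_stack(data, layer_sizes=[12, 8, 4])   # deeper network

# Feature combination: concatenate the per-network representations.
fused = np.concatenate([shallow_feat, deep_feat], axis=1)
print(fused.shape)  # (100, 16)
```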



Abstract

The invention relates to a depth feature representation method based on multiple stacked auto-encoding. Feature expressions of the different hierarchical structures of a target object are acquired by constructing stacked auto-encoding networks with different structures. First, a shallow neural network (one with a small number of hidden layers) is constructed and its parameters are trained by back-propagation until the network reaches its optimal configuration; the outputs of the second layer of the network are taken as feature expressions. Deeper network structures are then built and trained in the same manner, and the outputs of the corresponding layers are likewise taken as feature expressions. Finally, the features obtained above are fused and selected through feature combination and selection, yielding a hierarchical feature representation that characterizes the target and supports the corresponding visual tasks (image classification, recognition, and detection).
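One plausible reading of the fusion-and-selection step is sketched below: concatenate the per-level feature expressions, then keep the most informative dimensions. Variance thresholding is an assumed stand-in, since the abstract does not name a selection criterion, and combine_and_select is a hypothetical helper, not the patent's API.

```python
import numpy as np

def combine_and_select(feature_list, keep_ratio=0.5):
    """Concatenate per-level features, then keep the top-variance dimensions.

    Variance scoring is an assumed criterion; the patent only specifies
    'feature combination and selection' without fixing the method.
    """
    fused = np.concatenate(feature_list, axis=1)   # feature combination
    variances = fused.var(axis=0)                  # score each dimension
    k = max(1, int(keep_ratio * fused.shape[1]))
    top = np.argsort(variances)[::-1][:k]          # indices of high-variance dims
    return fused[:, np.sort(top)]

rng = np.random.default_rng(2)
level1 = rng.random((100, 12))   # e.g. features from the shallow network
level2 = rng.random((100, 8))    # e.g. features from a deeper network
hierarchical = combine_and_select([level1, level2])
print(hierarchical.shape)        # (100, 10)
```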

Description

technical field

[0001] The invention relates to a deep feature representation method, in particular to a deep feature representation method based on multiple stacked auto-encoding.

Background technique

[0002] A key issue in computer vision and multimedia applications is how to construct features with strong discriminative power and robustness. Traditional video analysis and image processing rely on many commonly used low-level visual features (such as color, texture, SIFT, HOG, and LBP), which have achieved good results in some visual tasks. These features nonetheless have limitations. First, their extraction pipelines are designed by hand and carry a certain algorithmic complexity, so they are generally suitable only for smaller data sets, whereas the video and image collections in current use are far larger; it is therefore difficult for such features to meet the needs of feature extraction on today's big data. Secondly, due to the influence of realistic a...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/66, G06N3/08
CPC: G06N3/084, G06N3/088, G06V30/194
Inventors: 胡瑞敏, 熊明福, 陈军, 沈厚明, 梁超, 陈金, 徐东曙, 郑淇
Owner: WUHAN UNIV