
Deep convex network with joint use of nonlinear random projection, restricted boltzmann machine and batch-based parallelizable optimization

A nonlinear network technology, applied in biological neural network models, computer components, speech analysis, etc., that addresses problems such as long training times and the difficulty of parallelizing learning.

Active Publication Date: 2012-10-17
MICROSOFT TECH LICENSING LLC

AI Technical Summary

Problems solved by technology

Although this learning algorithm has been shown to be powerful when combined with fine-tuning of the weights assigned to the DBN, it is extremely difficult to parallelize across machines, which makes learning slow and tedious.
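By contrast, the batch-based convex optimization that the deep convex network uses for its output weights decomposes over data batches. A minimal sketch of why, assuming a ridge-regression formulation U = (H Hᵀ + λI)⁻¹ H Tᵀ for hidden activations H and targets T (function and variable names here are illustrative, not from the patent text):

```python
import numpy as np

def solve_output_weights_batched(hidden_batches, target_batches, reg=1e-3):
    """Solve the convex output-weight problem from per-batch statistics.

    The closed-form ridge solution depends on the data only through the
    sums H @ H.T and H @ T.T, so each batch's contribution can be computed
    independently (e.g., on separate machines) and then added together.
    """
    n_hidden = hidden_batches[0].shape[0]
    gram = np.zeros((n_hidden, n_hidden))
    cross = None
    for H, T in zip(hidden_batches, target_batches):  # independent per batch
        gram += H @ H.T
        cross = H @ T.T if cross is None else cross + H @ T.T
    # Single small solve once the per-batch statistics are accumulated.
    return np.linalg.solve(gram + reg * np.eye(n_hidden), cross)
```

Because only the small accumulated matrices need to be communicated, this is the property that makes the training "batch-based, parallelizable" in the sense of the title.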




Detailed Description of Embodiments

[0020] Various technologies relating to the deep convex network (DCN) will now be described with reference to the drawings, where like reference numerals denote like elements throughout. Several functional block diagrams of example systems are shown and described herein for purposes of explanation; however, it is to be understood that functionality described as being carried out by a particular system component may instead be performed by multiple components. Similarly, a single component may be configured to perform functionality described as being carried out by multiple components, and some steps in the methods described herein may be omitted, reordered, or combined.

[0021] With reference to figure 1, an exemplary DCN 100 is shown, where the DCN (after training) can be used to perform automatic classification / recognition. According to an example, DCN 100 may be used to perform automatic speech recognition (ASR). In another example, DCN 100 may be us...



Abstract

A method is disclosed herein that includes an act of causing a processor to access a deep-structured, layered, or hierarchical model, termed a deep convex network (DCN), retained in a computer-readable medium, wherein the deep-structured model comprises a plurality of layers with weights assigned thereto. This layered model can produce output that serves as scores to be combined with transition probabilities between states in a hidden Markov model and with language model scores to form a full speech recognizer. The method makes joint use of nonlinear random projections and restricted Boltzmann machine (RBM) weights, and it stacks a lower module's output with the raw data to establish the input of the immediately higher module. Batch-based, convex optimization is performed to learn a portion of the deep convex network's weights, rendering the training amenable to parallel computation. The method can further include the act of jointly and substantially optimizing the weights, the transition probabilities, and the language model scores of the deep-structured model using an optimization criterion based on a sequence rather than a set of unrelated frames.
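The stacking scheme in the abstract — each module's hidden layer feeding a convex output-weight solve, with the module's output concatenated with the raw data to form the next module's input — can be sketched as follows. This is a minimal illustration, not the patent's implementation: the hidden weights here are plain random projections (standing in for the RBM-initialized or nonlinear-random-projection weights the abstract names), and all function names are hypothetical.

```python
import numpy as np

def train_dcn(X, T, n_modules=3, n_hidden=50, reg=1e-3, seed=0):
    """Train a toy DCN stack.

    X: (n_features, n_samples) raw input; T: (n_classes, n_samples) targets.
    Each module: fixed random hidden weights W, sigmoid hidden units, and a
    convex closed-form (ridge) solve for the output weights U.
    """
    rng = np.random.default_rng(seed)
    modules, inp = [], X
    for _ in range(n_modules):
        W = rng.standard_normal((inp.shape[0], n_hidden))
        H = 1.0 / (1.0 + np.exp(-(W.T @ inp)))        # sigmoid hidden units
        # Convex solve: minimizes ||U.T @ H - T||^2 + reg * ||U||^2
        U = np.linalg.solve(H @ H.T + reg * np.eye(n_hidden), H @ T.T)
        Y = U.T @ H
        modules.append((W, U))
        inp = np.vstack([X, Y])  # stack module output with the raw data
    return modules

def predict_dcn(modules, X):
    """Run the stacked modules forward on new data."""
    inp = X
    for W, U in modules:
        H = 1.0 / (1.0 + np.exp(-(W.T @ inp)))
        Y = U.T @ H
        inp = np.vstack([X, Y])
    return Y
```

Note the design point this makes concrete: only the per-module output weights U are learned, and each is the solution of a convex problem, so no end-to-end backpropagation through the stack is required.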

Description

Technical field

[0001] The present invention relates to technology for automatic classification.

Background technique

[0002] Speech recognition has been the subject of extensive research and commercial development. For example, speech recognition systems have been incorporated into mobile phones, desktop computers, automobiles, and so on, to provide particular responses to speech input from users. For instance, in a mobile phone equipped with speech recognition technology, a user can speak the name of a contact listed in the phone, and the phone can initiate a call to that contact. In addition, many companies currently use speech recognition technology to assist customers in identifying company employees, identifying problems with products or services, and so on.

[0003] Motivated in part by the desire to exploit certain similar attributes of human speech generation and perception systems, research on automatic speech recognition (ASR) has pioneered a layered arch...

Claims


Application Information

IPC(8): G06N3/08
CPC: G06K9/00; G06N3/0454; G06N3/08; G10L17/18; G10L15/16; G06N3/045; G06N3/04; G06N3/02
Inventor: L. Deng, D. Yu, A. Acero
Owner MICROSOFT TECH LICENSING LLC