Method for constructing incremental LSTM by utilizing training process compression and memory consolidation
An incremental-memory technology, applied to neural learning methods, neural architectures, biological neural network models, and the like, that addresses the large space overhead of storing training data, with the effects of reducing that space overhead, improving training efficiency, and ensuring practicality.
Embodiment Construction
[0021] In order to make the objects, technical solutions, and advantages of the present invention clearer, the present invention is further described in detail below in conjunction with the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only intended to explain the present invention, not to limit it.
[0022] As shown in Figure 1, the method for constructing an incremental LSTM using training process compression and memory consolidation includes the following steps:
[0023] Step 1. To adapt to incremental learning, the sequence data is divided into several batches, and the incremental training of the LSTM is completed batch by batch to reduce training overhead. The specific process is as follows: the data preparation module divides the sequence data S into N subsequence data sets {S_1, S_2, S_3, ..., S_N}, where the i-th subsequence data set is denoted S_i. For the i-th subsequence data set S_i, T...
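A minimal sketch of this batch-wise incremental training step is given below, assuming PyTorch. The names (split_sequence, IncrementalLSTM, train_incrementally) and the next-step-prediction training objective are hypothetical illustrations of the partitioning-and-sequential-training idea described above, not the patent's own identifiers; the patent's truncated specifics for S_i are not filled in.

```python
import torch
import torch.nn as nn

def split_sequence(S: torch.Tensor, N: int) -> list[torch.Tensor]:
    """Divide the sequence data S into N subsequence data sets S_1..S_N
    (here split along the batch dimension; an assumption)."""
    return list(torch.chunk(S, N, dim=0))

class IncrementalLSTM(nn.Module):
    """A plain LSTM with a linear readout; stands in for the patent's model."""
    def __init__(self, input_size: int, hidden_size: int):
        super().__init__()
        self.lstm = nn.LSTM(input_size, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, input_size)

    def forward(self, x):
        out, _ = self.lstm(x)
        return self.head(out)

def train_incrementally(model, subsets, epochs=1, lr=1e-3):
    """Complete the LSTM training batch by batch: each subset S_i is
    consumed in arrival order, so earlier subsets need not be kept,
    which is what reduces the training-data storage overhead."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for i, S_i in enumerate(subsets, start=1):
        x, y = S_i[:, :-1, :], S_i[:, 1:, :]  # next-step prediction targets
        for _ in range(epochs):
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            opt.step()
        print(f"finished incremental batch S_{i}, loss={loss.item():.4f}")

# Usage: 8 sequences of length 33 with 4 features, split into N=4 subsets.
S = torch.randn(8, 33, 4)
model = IncrementalLSTM(input_size=4, hidden_size=32)
train_incrementally(model, split_sequence(S, N=4))
```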