
Time sequence classification method, system, medium and device based on multi-representation learning

A time series classification technology applied in the field of time series data mining. It addresses the problems that existing methods ignore the potential contribution of deep learning to representation learning, lack adaptive understanding and representation of diverse time series features, and fail to improve classification accuracy, thereby improving classification accuracy and achieving classification interpretability.

Active Publication Date: 2021-06-08
SHANDONG UNIV

AI Technical Summary

Problems solved by technology

The limitations of existing TSC methods are twofold. On the one hand, some methods combine time series feature extraction strategies with a variety of traditional classification algorithms and then apply a voting mechanism; this improves classification accuracy to a certain extent but ignores the potential contribution of deep learning to representation learning. On the other hand, existing TSC methods tend to use only a single deep learning model (a convolutional neural network or a bidirectional long short-term memory network) to learn a general representation of the time series and complete classification, lacking adaptive understanding and representation of diverse time series features, and so may fail to effectively improve classification accuracy.
It is equally challenging for such methods to provide an interpretable basis for their classification results.


Examples


Embodiment 1

[0086] A time series classification method based on multi-representation learning, as shown in Figure 1, realizes time series classification by constructing a Multi-representation Learning Networks (MLN) model. The MLN model first uses a variety of representation strategies to comprehensively understand the time series features; secondly, based on a residual network and a bidirectional long short-term memory network, it effectively fuses the multiple representations to achieve representation enhancement; finally, it uses a multi-layer perceptron network to complete time series classification and, based on an attention mechanism, provides an interpretable basis for the classification results. The specific steps are as follows, with an illustrative code sketch after the step list:

[0087] (1) Multi-feature encoding of a given time series based on different time series representation strategies;

[0088] (2) Using the residual network and the bidirectional long short-term memory network to achieve representation fusion a...
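
The patent text itself contains no code, so the following is only a rough sketch of how the three stages above could be wired together. PyTorch, the module names, the layer sizes, and the assumption that all three encodings are padded to the same length are illustrative choices, not the authors' implementation.

```python
# Hypothetical sketch of the three-stage MLN pipeline (PyTorch assumed).
# Names, layer sizes, and wiring are illustrative, not taken from the patent.
import torch
import torch.nn as nn

class MLNSketch(nn.Module):
    def __init__(self, n_classes: int, hidden: int = 64):
        super().__init__()
        # Stage (1): one 1-D conv branch per representation (raw, PLR, PAA).
        self.branches = nn.ModuleList(
            [nn.Conv1d(1, hidden, kernel_size=3, padding=1) for _ in range(3)]
        )
        # Stage (2): residual conv block and a bidirectional LSTM over the fused sequence.
        self.res_conv = nn.Conv1d(hidden, hidden, kernel_size=3, padding=1)
        self.bilstm = nn.LSTM(hidden, hidden, batch_first=True, bidirectional=True)
        # Stage (3): multi-layer perceptron classifier.
        self.mlp = nn.Sequential(nn.Linear(hidden * 3, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_classes))

    def forward(self, raw, plr, paa):
        # Each input: (batch, 1, seq_len); all three encodings assumed padded to one length.
        reps = [conv(x) for conv, x in zip(self.branches, (raw, plr, paa))]
        fused = reps[0] * reps[1] * reps[2]               # element-wise fusion
        res = torch.relu(self.res_conv(fused) + fused)    # residual path
        lstm_out, _ = self.bilstm(fused.transpose(1, 2))  # BiLSTM path: (batch, seq, 2*hidden)
        merged = torch.cat([res.mean(dim=2), lstm_out.mean(dim=1)], dim=1)
        return self.mlp(merged)                           # class logits
```

In this sketch, each representation feeds its own one-dimensional convolution branch (step 1), the residual and BiLSTM paths correspond to the fusion and enhancement of step (2), and the MLP produces the classification.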

Embodiment 2

[0092] A time series classification method based on multi-representation learning according to Embodiment 1, the difference being:

[0093] As shown in Figure 1, in step (1), the specific implementation steps of multi-feature encoding include:

[0094] A. Based on the Piecewise Linear Representation (PLR) strategy, perform feature encoding on the i-th time series T_i of the time series data set (the data set includes N time series, 1 ≤ i ≤ N) to obtain the first encoding sequence;

[0095] B. Based on the Piecewise Aggregate Approximation (PAA) strategy, perform feature encoding on the time series T_i to obtain the second encoding sequence;

[0096] C. Using a one-dimensional time series convolution operation, perform data characterization on the first encoding sequence obtained in step A, the second encoding sequence obtained in step B, and the time series T_i, to obtain the basic characterization sequences.

[0097] The specific implementation process of step C is:

[0098] The first enc...
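
Steps A and B rely on two well-known time series encodings. The sketch below is a minimal NumPy illustration (the segment count and the slope-based PLR variant are assumptions, since the excerpt does not specify the exact formulation used).

```python
# Minimal sketches of PAA and a slope-based PLR over equal-width segments.
# The segment count and the PLR variant are illustrative assumptions.
import numpy as np

def paa(series: np.ndarray, n_segments: int) -> np.ndarray:
    """Piecewise Aggregate Approximation: mean of each equal-width segment."""
    segments = np.array_split(series, n_segments)
    return np.array([seg.mean() for seg in segments])

def plr(series: np.ndarray, n_segments: int) -> np.ndarray:
    """Piecewise linear representation: least-squares slope of each segment."""
    slopes = []
    for seg in np.array_split(series, n_segments):
        x = np.arange(len(seg))
        slope = np.polyfit(x, seg, deg=1)[0] if len(seg) > 1 else 0.0
        slopes.append(slope)
    return np.array(slopes)

# Example: encode one time series T_i into its first and second encoding sequences.
T_i = np.sin(np.linspace(0, 6 * np.pi, 128)) + 0.1 * np.random.randn(128)
first_code = plr(T_i, n_segments=16)   # step A
second_code = paa(T_i, n_segments=16)  # step B
```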

Embodiment 3

[0104] A time series classification method based on multi-representation learning according to Embodiment 1, the difference being:

[0105] In step (2), the specific implementation steps of representation fusion and enhancement include:

[0106] D. Perform an element-wise multiplication operation on the basic characterization sequences obtained in step C to achieve representation fusion;

[0107] E. Input the fused characterization sequence obtained in step D into the residual network and the bidirectional long short-term memory network respectively, to deeply understand the temporal characteristics; then merge the characterization sequences processed by the residual network and the bidirectional long short-term memory network, to achieve representation fusion and enhancement.

[0108] In step D, the element-wise multiplication operation is shown in formula (IV) and formula (V):

[0109]

[0110]

[0111] In formula (IV) and formula (V),...
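
Formulas (IV) and (V) are not reproduced in this excerpt; the following is only a hedged sketch of steps D and E as described in prose, with PyTorch, tensor shapes, and layer sizes assumed for illustration.

```python
# Hedged sketch of steps D and E: element-wise fusion of the basic characterization
# sequences, then parallel residual-conv and BiLSTM paths whose outputs are merged.
# Shapes and layer sizes are illustrative assumptions (PyTorch assumed).
import torch
import torch.nn as nn

batch, channels, length = 8, 64, 128
rep_raw = torch.randn(batch, channels, length)   # basic characterizations from step C
rep_plr = torch.randn(batch, channels, length)
rep_paa = torch.randn(batch, channels, length)

# Step D: element-wise multiplication fuses the representations position by position.
fused = rep_raw * rep_plr * rep_paa               # (batch, channels, length)

# Step E: residual path (1-D convolution with a skip connection) ...
res_conv = nn.Conv1d(channels, channels, kernel_size=3, padding=1)
res_out = torch.relu(res_conv(fused) + fused)     # (batch, channels, length)

# ... and BiLSTM path over the time dimension.
bilstm = nn.LSTM(channels, channels, batch_first=True, bidirectional=True)
lstm_out, _ = bilstm(fused.transpose(1, 2))       # (batch, length, 2*channels)

# Merge the two paths along the feature axis for representation enhancement.
enhanced = torch.cat([res_out.transpose(1, 2), lstm_out], dim=-1)  # (batch, length, 3*channels)
```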


Abstract

The invention relates to a time sequence classification method, system, medium and device based on multi-representation learning. The method comprises the following steps: (1) carrying out multi-feature coding of a given time sequence based on different time sequence representation strategies; (2) realizing representation fusion and enhancement by using a residual network and a bidirectional long short-term memory network; and (3) completing classification by using a multi-layer perceptron network and realizing classification interpretability by using an attention mechanism. The method constructs a multi-channel time sequence representation learning model, so that the time sequence characteristics can be comprehensively understood based on various representation strategies. The representation fusion model based on the residual network and the bidirectional long short-term memory network can effectively fuse the multi-view representations and realize representation enhancement, so that the classification precision is effectively improved. On the basis of the attention mechanism, the method can effectively recognize the important characteristics of the time sequence; that is, it can provide an interpretable basis for the classification result and realize classification interpretability.
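
The attention-based interpretability of step (3) is not detailed in this excerpt. One conventional way to obtain per-time-step importance weights ahead of a multi-layer perceptron classifier, offered purely as an assumed illustration rather than the patented mechanism, is attention pooling:

```python
# Hypothetical attention pooling before the MLP classifier; the scoring network and
# its sizes are assumptions used only to illustrate how per-step weights could serve
# as an interpretability signal.
import torch
import torch.nn as nn

class AttentionPooling(nn.Module):
    def __init__(self, feat_dim: int):
        super().__init__()
        self.score = nn.Linear(feat_dim, 1)   # one scalar score per time step

    def forward(self, seq):                    # seq: (batch, length, feat_dim)
        weights = torch.softmax(self.score(seq).squeeze(-1), dim=1)  # (batch, length)
        pooled = torch.bmm(weights.unsqueeze(1), seq).squeeze(1)     # (batch, feat_dim)
        return pooled, weights                 # weights indicate which steps mattered

# Usage: pooled features feed the MLP; the weights highlight important time steps.
attn = AttentionPooling(feat_dim=192)
mlp = nn.Sequential(nn.Linear(192, 64), nn.ReLU(), nn.Linear(64, 5))
enhanced = torch.randn(8, 128, 192)            # e.g. the merged sequence from step E
pooled, weights = attn(enhanced)
logits = mlp(pooled)
```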

Description

Technical Field

[0001] The invention belongs to the technical field of time series data mining, and in particular relates to a time series classification method, system, medium and device based on multi-representation learning.

Background Technology

[0002] With the rapid development of technologies such as the Internet, cloud computing, and big data processing, we have entered an era of big data in which everything is connected. Among the massive heterogeneous data there is a class of time-related data, present in almost all real-world application fields, referred to as "time series" for short. A time series can not only reflect the specific data characteristics at a particular moment, but also reveal the continuous change of data entities over time, their change trends, and potential knowledge. Time series often have big data characteristics such as being "massive", "high-dimensional", and "continuously generated", and it is very challenging to study them...


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06F16/2458; G06F16/26; G06F16/28; G06N3/04; G06N3/08
CPC: G06F16/2474; G06F16/26; G06F16/287; G06N3/049; G06N3/08
Inventor 王少鲲胡宇鹏李学庆曲磊钢李振展鹏
Owner SHANDONG UNIV