Data augmentation method based on RBSAGAN

A data augmentation technology, applied in the field of generating motor imagery EEG signals with deep learning methods, that addresses problems such as missing features, limited feature information in EEG signals, and failure to make full use of the signals' temporal characteristics.

Pending Publication Date: 2021-04-16
BEIJING UNIV OF TECH

AI-Extracted Technical Summary

Problems solved by technology

Making the generated data retain the key features contained in EEG signals is critical, yet existing EEG signal augmentation methods fail to capture the relationship between the data at each discrete moment and the global information, and do not make full use of the temporal characteristics of the signal, ...

Abstract

The invention discloses an EEG signal data augmentation method based on RBSAGAN. The method designs Up ResBlock and Down ResBlock network structures that extract features under receptive fields of different scales through two 1D convolution layers in the trunk and one 1D convolution layer in the branch, using a 1D deconvolution layer and an average pooling layer, respectively, to enlarge and reduce the data dimensions. A 1D Self-Attention network is designed on the basis of the Self-Attention mechanism. This network structure does not depend on the distance between the data at different discrete moments: it obtains global temporal features directly by computing, in parallel, the similarity between the data at each pair of discrete moments, making it well suited to EEG signals rich in temporal information. The discriminator of RBSAGAN is composed of Down ResBlock, 1D Self-Attention, and related networks, and outputs a loss value used to update the parameters of the generator and discriminator until Nash equilibrium is reached. The new data produced by the generator are combined with the original data to form an augmented data set, which is input into a 1D CNN for classification to evaluate the quality of the generated data.

Examples

  • Experimental program(1)

Example Embodiment

[0028] The experiments of the present invention are conducted in the following hardware environment: a 14-core Intel Xeon E5-2683 2.00 GHz CPU and a GeForce 1070 GPU with 8 GB of memory. All neural networks are implemented using the PyTorch framework.
[0029] The data used in the present invention is the "BCI Competition IV 2a" public data set. The EEG signals were acquired at a 250 Hz sampling rate with an electrode cap arranged according to the international 10-20 system. Nine subjects performed four categories of motor imagery tasks: left hand, right hand, foot, and tongue. Each subject took part in a two-day experiment with 288 trials per day, 576 trials in total. The EEG signals were filtered with a 0.5-100 Hz band-pass filter and a 50 Hz notch filter. In each trial, a cue arrow pointing left, right, up, or down (corresponding to the left hand, right hand, tongue, or foot task) appeared at 2 s and remained for 1.25 s; the subject performed the corresponding motor imagery task according to the direction of the arrow displayed on the screen, and rested from 6 s onward.
[0030] The invention is described in detail below with reference to the accompanying drawings.
[0031] Step 1: EEG signal preprocessing
[0032] The original EEG data dimension is 576 × 1000 × 22: 576 trials in total, each collected over 22 leads with 1000 sample points per lead. An 8-30 Hz fourth-order Butterworth band-pass filter is applied to the EEG signals, and the data are normalized.
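The preprocessing step above can be sketched as follows. This is a minimal illustration, not the patent's exact code: the filter design follows the stated 8-30 Hz fourth-order Butterworth band-pass, while the zero-phase filtering (`filtfilt`), per-channel z-score normalization, and all variable names are assumptions.

```python
# Sketch of Step 1 preprocessing: 8-30 Hz 4th-order Butterworth band-pass
# followed by per-channel z-score normalization (normalization scheme assumed).
# Array layout follows the text: trials x samples x channels = 576 x 1000 x 22.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 250.0  # sampling rate of the BCI Competition IV 2a recordings

def preprocess(eeg, low=8.0, high=30.0, order=4, fs=FS):
    """Band-pass filter each channel along time, then z-score normalize."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, eeg, axis=1)  # filter along the time axis
    mean = filtered.mean(axis=1, keepdims=True)
    std = filtered.std(axis=1, keepdims=True) + 1e-8
    return (filtered - mean) / std

raw = np.random.randn(4, 1000, 22)  # small stand-in for the 576-trial set
clean = preprocess(raw)             # same shape as the input
```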
[0033] Step 2: RBSAGAN network
[0034] Step 2.1: The schematic of RBSAGAN is shown in Figure 1. The network is mainly composed of the generator US and the discriminator DS. The DS distinguishes real input data from generated data; the US takes a noise vector as input and attempts to generate fake data that the DS cannot identify as fake. The parameters of the US and DS are updated through continuous training.
[0035] The US structure of RBSAGAN is shown in Figure 2(a). A 64-dimensional noise sample is used as the input of the US and connected to a fully connected layer of 12800 units, whose output is converted to 100 × 128 by a reshape operation. The data are then upsampled by two Up ResBlock networks, a 1D Self-Attention network lets the US build connections between time samples, and a final 1D convolution makes the output data dimension the same as that of the EEG signal. The Up ResBlock structure is shown in Figure 3(a): its trunk consists of a BN layer, a 1D deconvolution layer, a 1D convolution layer, a BN layer, and a 1D convolution layer, while the branch consists of a 1D deconvolution layer and a 1D convolution layer. In the two Up ResBlock networks, the kernel size of the 1D convolution layers is 3 with stride 1; the kernel size of the 1D deconvolution layers is 7 with strides 5 and 2, respectively; both use the same preset activation function; the numbers of feature maps are 64 and 32, respectively; and the sizes of the output data are 64 × 500 and 32 × 1003. The 1D Self-Attention network, shown in Figure 4, weights all of the data so that the US can build connections across time. Its convolution kernel size is 1 with stride 1, and the numbers of feature maps are, from left to right, 4, 4, and 32. The output feature vector f is transposed and matrix-multiplied with g, then passed through a Softmax activation to obtain an attention map; this map is matrix-multiplied with h to obtain the self-attention features, which are multiplied by a scale factor and added to the data input to the structure. The final output is 32 × 1003. Finally, a 1D convolution with kernel size 4, stride 1, and 22 feature maps makes the output dimension 22 × 1000, the same as the EEG signal.
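The 1D Self-Attention computation described above (f, g, h projections, a Softmax attention map, and a learnable scale factor on the residual path) can be sketched as a PyTorch module. This is a hedged reconstruction in the spirit of SAGAN adapted to 1-D signals; the class name, reduced channel count, and zero initialization of the scale factor are assumptions, not the patent's exact design.

```python
# Minimal 1D self-attention sketch: f, g, h are 1x1 convolutions, the
# attention map is softmax(f^T g), and a learnable scale gamma adds the
# attended features back to the input (residual connection).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttention1D(nn.Module):
    def __init__(self, channels, reduced=4):
        super().__init__()
        self.f = nn.Conv1d(channels, reduced, kernel_size=1)
        self.g = nn.Conv1d(channels, reduced, kernel_size=1)
        self.h = nn.Conv1d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learnable scale factor

    def forward(self, x):                       # x: (batch, channels, time)
        q = self.f(x)                           # (B, reduced, T)
        k = self.g(x)                           # (B, reduced, T)
        v = self.h(x)                           # (B, channels, T)
        attn = F.softmax(torch.bmm(q.transpose(1, 2), k), dim=-1)  # (B, T, T)
        out = torch.bmm(v, attn.transpose(1, 2))  # weighted sum over time
        return self.gamma * out + x             # scaled residual addition

x = torch.randn(2, 32, 100)
y = SelfAttention1D(32)(x)  # output shape equals input shape
```

Because similarity is computed between every pair of time steps in parallel, the block captures global temporal relations regardless of the distance between moments, which is the property the text emphasizes.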
[0036] The DS structure of RBSAGAN is shown in Figure 2(b). Real data and generated data are used as input and pass through a 1D convolution layer, two Down ResBlock networks, a 1D Self-Attention network, and two fully connected layers. The first 1D convolution layer has kernel size 3, stride 1, and 16 feature maps, and its output data dimension is 16 × 1000. The Down ResBlock network, shown in Figure 3(b), has a trunk consisting of two 1D convolution layers and a 1D average pooling layer, and a branch consisting of one 1D convolution layer and one 1D average pooling layer. In both Down ResBlock structures the 1D convolution kernel size is 3 with stride 1; the 1D average pooling kernel size is 2 with strides 5 and 2, respectively; the numbers of feature maps are 64 and 128, and the output data dimensions are 64 × 200 and 128 × 100, respectively; the activation functions are Leaky ReLU. The two subsequent fully connected layers have dimensions 128 and 1, respectively, with Leaky ReLU activation, and the final output serves as the basis for optimizing the US and DS parameters.
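A Down ResBlock as described above can be sketched as follows. The trunk/branch split, kernel size 3, pooling kernel 2, and Leaky ReLU follow the text; the padding, Leaky ReLU slope, and placement of activations inside the trunk are assumptions made so the stated output shapes (e.g. 16 × 1000 → 64 × 200 with pooling stride 5) work out.

```python
# Hedged sketch of a Down ResBlock: trunk = two 1-D convolutions plus
# average pooling; branch (shortcut) = one 1-D convolution plus average
# pooling; the two paths are summed.
import torch
import torch.nn as nn

class DownResBlock(nn.Module):
    def __init__(self, in_ch, out_ch, pool_stride):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv1d(in_ch, out_ch, kernel_size=3, stride=1, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv1d(out_ch, out_ch, kernel_size=3, stride=1, padding=1),
            nn.LeakyReLU(0.2),
            nn.AvgPool1d(kernel_size=2, stride=pool_stride),
        )
        self.branch = nn.Sequential(
            nn.Conv1d(in_ch, out_ch, kernel_size=3, stride=1, padding=1),
            nn.AvgPool1d(kernel_size=2, stride=pool_stride),
        )

    def forward(self, x):
        return self.trunk(x) + self.branch(x)  # residual sum of both paths

x = torch.randn(2, 16, 1000)
y = DownResBlock(16, 64, pool_stride=5)(x)  # 64 feature maps, length 200
```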
[0037] Step 2.2: RBSAGAN uses the Adam optimizer with an initial learning rate of 0.0001 and momentum terms β1 and β2 of 0.1 and 0.999, respectively. A loss value is obtained from the DS output, and the network parameters of the US and DS are optimized by back-propagation. The loss function used by RBSAGAN is the same as that of WGAN. To keep the DS and US in a relatively balanced state during training, the DS is trained 5 times for each US training step. The networks are trained for 100 epochs with a batch size of 10, and new data are generated through RBSAGAN for each category.
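The optimizer settings and update schedule in Step 2.2 can be sketched as below. The Adam hyperparameters (lr 1e-4, betas 0.1/0.999), WGAN-style loss, batch size 10, and the 5-to-1 DS/US update ratio come from the text; the tiny placeholder networks and the absence of a Lipschitz constraint (weight clipping or gradient penalty, which full WGAN training would need) are simplifications.

```python
# Illustrative WGAN-style update schedule: five discriminator (critic)
# updates per generator update, Adam with the stated hyperparameters.
# G and D below are small placeholders, not the RBSAGAN networks.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 100))
D = nn.Sequential(nn.Linear(100, 64), nn.LeakyReLU(0.2), nn.Linear(64, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4, betas=(0.1, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4, betas=(0.1, 0.999))

real = torch.randn(10, 100)            # batch size 10, stand-in data
for _ in range(5):                     # 5 D updates per G update
    fake = G(torch.randn(10, 64)).detach()
    # WGAN critic loss: minimize D(fake) - D(real)
    d_loss = D(fake).mean() - D(real).mean()
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

g_loss = -D(G(torch.randn(10, 64))).mean()  # generator maximizes D(fake)
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```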
[0038] Step 3: Evaluating the quality of the generated data
[0039] The generated data are combined with the existing data to form the data set for a 1D CNN. The 1D CNN, designed as shown in Figure 5, consists of a 1D convolution layer, a BN layer, a max pooling layer, and three fully connected layers. The convolution layer has kernel size 5 and stride 5 with 16 feature maps; the max pooling layer has kernel size 2 and stride 1; the three fully connected layers have dimensions 600, 60, and 4, with ReLU activation. Dropout is added between the fully connected layers to alleviate overfitting, and the network finally outputs the probability of each category. The experimental results are shown in the table below.
[0040]Table 1 Classification of each subject
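The evaluation classifier of Step 3 can be sketched as a small PyTorch module. The layer types and stated hyperparameters (conv kernel 5, stride 5, 16 feature maps; max pooling kernel 2, stride 1; fully connected sizes 600/60/4; ReLU; dropout) follow the text, while the dropout probability and the input flattening arithmetic are assumptions.

```python
# Sketch of the Step 3 evaluation 1D CNN: conv -> BN -> ReLU -> max pool,
# then three fully connected layers with dropout in between.
import torch
import torch.nn as nn

class EvalCNN(nn.Module):
    def __init__(self, channels=22, length=1000, classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(channels, 16, kernel_size=5, stride=5),
            nn.BatchNorm1d(16),
            nn.ReLU(),
            nn.MaxPool1d(kernel_size=2, stride=1),
        )
        flat = 16 * (length // 5 - 1)  # 16 x 199 after conv and pooling
        self.classifier = nn.Sequential(
            nn.Linear(flat, 600), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(600, 60), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(60, classes),    # class logits; softmax gives probabilities
        )

    def forward(self, x):              # x: (batch, 22 leads, 1000 samples)
        return self.classifier(self.features(x).flatten(1))

logits = EvalCNN()(torch.randn(2, 22, 1000))  # one logit per class
```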


