A multi-modal data fusion method based on deep learning

A data fusion and deep learning technology in the field of machine learning that addresses the problem of fusing data from multiple modalities, achieving effects such as facilitating subsequent processing, realizing compression, and simplifying the fusion process.

Active Publication Date: 2019-09-20
FUJIAN INST OF RES ON THE STRUCTURE OF MATTER CHINESE ACAD OF SCI

AI Technical Summary

Problems solved by technology

[0004] However, practical applications also involve large amounts of sensor data, and the prior art does not fuse data across multiple modalities such as audio, image, text, and sensor data.



Examples


Embodiment 1

[0044] Referring to Figure 1, an embodiment of the present invention provides a multi-modal data fusion method based on deep learning, the method comprising:

[0045] 101. Perform vectorization processing on each of the N modalities of data; N is a natural number, and the N modalities include sensor data;

[0046] In this embodiment of the present invention, N is set to 4; that is, the four modalities comprise audio data, image data, and text data in addition to sensor data.

[0047] Specifically, the audio data is sparsified and vectorized, as follows:

[0048] The average activation of the j-th hidden-layer neuron is computed as $\hat{\rho}_j = \frac{1}{m}\sum_{i=1}^{m} a_j(x^{(i)})$, where m is the number of audio data samples and $x^{(i)}$ denotes the i-th audio sample;

[0049] The sparsity penalty is $\sum_{j=1}^{n} \mathrm{KL}(\rho \,\|\, \hat{\rho}_j)$, where $\mathrm{KL}(\rho \,\|\, \hat{\rho}_j) = \rho \log\frac{\rho}{\hat{\rho}_j} + (1-\rho)\log\frac{1-\rho}{1-\hat{\rho}_j}$ is the relative entropy between two Bernoulli distributions with means $\rho$ and $\hat{\rho}_j$; $\rho$ is the sparsity parameter, $\hat{\rho}_j$ is the activation degree of hidden-layer neuron j, and n is the number of hidden-layer neurons; and indepen...
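The sparsity constraint above can be sketched numerically. This is a minimal illustration assuming sigmoid hidden activations in (0, 1), not the patent's actual training code:

```python
import numpy as np

def sparsity_penalty(activations, rho=0.05):
    """KL-divergence sparsity penalty for a sparse autoencoder.

    activations: (m, n) array; activations[i, j] is the activation of
                 hidden neuron j on the i-th audio sample (assumed to lie
                 in (0, 1), e.g. sigmoid outputs).
    rho: target sparsity parameter.
    Returns the scalar penalty sum_j KL(rho || rho_hat_j).
    """
    rho_hat = activations.mean(axis=0)  # average activation per hidden neuron
    kl = (rho * np.log(rho / rho_hat)
          + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))
    return kl.sum()
```

The penalty is zero exactly when every neuron's average activation equals the target ρ, and grows as the averages drift away, which is what drives the learned audio representation toward sparsity.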



Abstract

The present application discloses a multi-modal data fusion method based on deep learning, comprising: performing vectorization processing on each of N modalities of data, where N is a natural number and the N modalities include sensor data; modeling each modality to obtain N single-mode representations; fusing any two single-mode representations to obtain dual-mode data; fusing any two dual-mode data that share a common modality, or fusing any dual-mode data with a single-mode representation not contained in it, to obtain tri-mode data; and so on, fusing the obtained (N-1)-mode data to obtain the N-mode data. The application can fuse data from multiple modalities, including sensor data.
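The pairwise fusion cascade described in the abstract can be sketched as follows. The `fuse` operation here is a hypothetical stand-in (simple concatenation) for the joint deep model the application trains at each step:

```python
from itertools import combinations
import numpy as np

def fuse(a, b):
    # Stand-in fusion op for illustration; the patent trains a joint
    # deep model here rather than concatenating feature vectors.
    return np.concatenate([a, b])

def cascade_fusion(single):
    """single: dict mapping modality name -> feature vector.

    Builds k-mode representations level by level, as in the abstract:
    pairs of single-mode -> dual-mode, dual-mode + a disjoint single
    mode -> tri-mode, ..., up to the full N-mode representation.
    Returns a dict mapping frozenset(modalities) -> fused vector.
    """
    reps = {frozenset([m]): v for m, v in single.items()}
    names = list(single)
    for k in range(2, len(names) + 1):
        for combo in combinations(names, k):
            # fuse the (k-1)-mode representation with the remaining mode
            sub = frozenset(combo[:-1])
            reps[frozenset(combo)] = fuse(reps[sub],
                                          reps[frozenset([combo[-1]])])
    return reps
```

With the four modalities of the embodiment (audio, image, text, sensor), the cascade produces every non-empty subset of modalities, ending with the single 4-mode representation.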

Description

technical field

[0001] This application relates to a multi-modal data fusion method based on deep learning, and belongs to the field of machine learning.

Background technique

[0002] Deep learning has become the dominant form of machine learning in computer vision, speech analysis, and many other fields. Deep learning adopts a layered structure similar to a neural network: the system consists of a multi-layer network comprising an input layer, multiple hidden layers, and an output layer. Only nodes in adjacent layers are connected; nodes within the same layer, and nodes in non-adjacent layers, are not connected.

[0003] In the prior art, multi-modal data fusion in deep learning mainly uses a deep autoencoder to fuse the audio and video modalities, or uses a Gaussian-Bernoulli restricted Boltzmann machine together with a replicated-softmax restricted Boltzmann machine to fuse the image and text modalities, or uses deep learning with a deep Boltz...

Claims


Application Information

Patent Timeline
no application
Patent Type & Authority Patents(China)
IPC(8): G06K9/62
CPC: G06F18/251, G06F18/25
Inventor 郭利周盛宗王开军余志刚付璐斯
Owner FUJIAN INST OF RES ON THE STRUCTURE OF MATTER CHINESE ACAD OF SCI