Multi-modal data knowledge information extraction method based on deep-width joint neural network

A technology that combines neural networks with information extraction, applicable to biological neural network models, neural learning methods, digital data information retrieval, and related fields. Stated effects: improved robustness, low feature dimensionality, and strong robustness.

Pending Publication Date: 2021-09-07
SOUTH CHINA UNIV OF TECH

AI Technical Summary

Problems solved by technology

[0003] At present, knowledge feature extraction methods for multi-modal data mainly use machine learning or deep learning to process and understand multi-source modal information, but existing methods are often unable to adaptively achieve effective fusion between the features of multiple modalities (Li Huifang, Zhao Leilei, Hu Guangzheng. An Intelligent Fault Diagnosis Method Based on Multimodal Fusion Deep Learning, 2018; Zhong Chongliang. A Multimodal Feature Fusion Method and Device Based on Convolutional Neural Networks, 2019).
Multimodal learning has gone through several stages of development and has now fully adopted deep learning as the main means of knowledge extraction. However, traditional deep learning methods are time-consuming and labor-intensive, especially when applied to multimodal data, where powerful computing resources are often required; this makes it difficult to meet the needs of industry and academia.


Image

(Drawings: three figures illustrating the multi-modal data knowledge information extraction method based on the deep-width joint neural network)

Examples


Embodiment Construction

[0057] The present invention will be further described below in conjunction with specific examples.

[0058] As shown in Figure 1, the multi-modal data knowledge information extraction method based on the deep-width joint neural network provided by this embodiment includes the following steps:

[0059] 1) Collect the multi-modal data logs generated by the intelligent manufacturing factory system on its daily production line, including voice, text, image and other types of multi-modal data, and preprocess the data. The log samples are fed into a distributed log system implemented on Kafka; because a large number of samples must be processed, the processed data samples are stored in the storage module of the Hadoop distributed file system (HDFS);
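The ingestion step above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the record fields (`line_id`, `payload_ref`), the topic name, and the idea of sending an HDFS reference rather than raw bytes are all assumptions; the actual publish call (via a client such as kafka-python's `KafkaProducer`) is shown only as a comment.

```python
import json
import time

def build_log_record(modality, payload_ref, line_id):
    """Assemble one production-line log entry.

    `modality` is e.g. "voice", "text", or "image"; `payload_ref` is a
    pointer (here, a hypothetical HDFS path) to the raw sample rather
    than the bytes themselves, so the Kafka message stays small.
    """
    return {
        "timestamp": time.time(),
        "line_id": line_id,
        "modality": modality,
        "payload_ref": payload_ref,
    }

def serialize_for_kafka(record):
    # Kafka messages are byte strings; JSON keeps the record self-describing.
    return json.dumps(record, sort_keys=True).encode("utf-8")

# With a Kafka client installed, the sample would then be published, e.g.:
#   KafkaProducer(bootstrap_servers="...").send("factory-logs", msg)
msg = serialize_for_kafka(build_log_record("image", "/hdfs/raw/img_0001.png", 3))
```

Keeping only a reference to the payload in the message is a common design choice when large binary samples (images, audio) are involved, since the bulk data already lands in HDFS.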

[0060] Preprocessing of the data logs produced by the smart manufacturing factory mainly includes the following operations: filtering of data noise and handling of missing values of data features. The processing of missing values o...
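The two preprocessing operations named above can be sketched in a few lines. The patent does not specify the techniques, so the choices here are assumptions: mean imputation for missing feature values and a sliding median filter for data noise; the field name `temp` is purely illustrative.

```python
from statistics import mean, median

def impute_missing(rows, key):
    """Fill missing values of one numeric feature with the column mean
    (one common choice for the missing-value step; an assumption here)."""
    present = [r[key] for r in rows if r.get(key) is not None]
    fill = mean(present) if present else 0.0
    for r in rows:
        if r.get(key) is None:
            r[key] = fill
    return rows

def filter_noise(rows, key, window=3):
    """Sliding median filter over a numeric feature, standing in for the
    patent's unspecified data-noise filtering."""
    vals = [r[key] for r in rows]
    out = []
    for i in range(len(vals)):
        lo, hi = max(0, i - window // 2), min(len(vals), i + window // 2 + 1)
        out.append(median(vals[lo:hi]))
    for r, v in zip(rows, out):
        r[key] = v
    return rows

# A spike (99.0) and a missing value, as might appear in raw factory logs.
logs = [{"temp": 20.0}, {"temp": None}, {"temp": 21.0}, {"temp": 99.0}, {"temp": 21.5}]
cleaned = filter_noise(impute_missing(logs, "temp"), "temp")
```

After this, the spike at index 3 is replaced by the local median (21.5) and no missing values remain, which is the state the records should be in before being written to HDFS.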



Abstract

The invention discloses a multi-modal data knowledge information extraction method based on a deep-width joint neural network. The method comprises the steps of: 1) collecting the multi-modal data generated by the production of an intelligent manufacturing factory, carrying out data-cleaning preprocessing, and storing the data in a Hadoop distributed file system; 2) partitioning the original data log records stored in HDFS into sub-tables according to modal properties, processing the multi-modal data into single-modal data features, including single-modal feature tables for voice, text, images and the like, and storing the single-modal data features in a HIVE database; and 3) performing feature extraction on the multi-modal data feature tables with the deep-width joint network to obtain the corresponding high-level abstract feature knowledge, thereby realizing extraction of multi-modal data knowledge information by the deep-width joint network.
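Step 2 of the abstract, partitioning mixed log records into one single-modality table per modality, can be sketched as a simple grouping pass. The record field names below are illustrative assumptions, and in the actual system each resulting table would be written to HIVE rather than kept in memory.

```python
from collections import defaultdict

def split_by_modality(log_records):
    """Partition raw multi-modal log records into single-modality tables,
    mirroring step 2 (one table per modality such as voice, text, image)."""
    tables = defaultdict(list)
    for rec in log_records:
        # Drop the modality tag itself; each output table is single-modality.
        row = {k: v for k, v in rec.items() if k != "modality"}
        tables[rec["modality"]].append(row)
    return dict(tables)

# Hypothetical records; "payload" and "sample_id" are not from the patent.
raw = [
    {"modality": "text",  "sample_id": 1, "payload": "spindle ok"},
    {"modality": "image", "sample_id": 2, "payload": "img_0002.png"},
    {"modality": "text",  "sample_id": 3, "payload": "belt jam"},
]
tables = split_by_modality(raw)
```

Each value of `tables` now corresponds to one single-modal feature table, ready for per-modality feature extraction by the deep-width joint network in step 3.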

Description

Technical field

[0001] The present invention relates to technical fields such as deep learning, width learning, and multimodal data feature extraction, and in particular to a multimodal data knowledge information extraction method based on a deep-width joint neural network.

Background technique

[0002] With the rapid development of Internet technology and the continuous, in-depth transformation of the digital industrial chain, the era of big data has arrived. Cloud computing, artificial intelligence, and other technologies are growing rapidly, and a digital ecological society with big data at its core has been established. Multimodal data from all aspects of the real world are difficult to analyze effectively at the current technical level, and the difficulty of processing massive data has also greatly increased. To solve these problems, a new method of data analysis and processing is urgently needed. Using the latest AI technology fusion t...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06K9/62; G06N3/04; G06N3/08; G06F16/182
CPC: G06N3/084; G06N3/088; G06F16/182; G06N3/047; G06N3/048; G06N3/045; G06F18/2155; G06F18/2415; G06F18/10; G06F18/253
Inventor 刘雨晨余志文杨楷翔施一帆陈俊龙
Owner SOUTH CHINA UNIV OF TECH