658 results about "multi-modal data" patented technology

Multi-modal data collection simply describes using more than one data-collection technology to accomplish a task. It can easily be argued that we have been doing multi-modal data collection for decades. Technically speaking, entering information on a keypad is a form of data collection,...

Hypercomplex deep learning methods, architectures, and apparatus for multimodal small, medium, and large-scale data representation, analysis, and applications

A method and system for creating hypercomplex representations of data includes, in one exemplary embodiment, at least one set of training data with associated labels or desired response values, transforming the data and labels into hypercomplex values, methods for defining hypercomplex graphs of functions, training algorithms to minimize the cost of an error function over the parameters in the graph, and methods for reading hierarchical data representations from the resulting graph. Another exemplary embodiment learns hierarchical representations from unlabeled data. The method and system, in another exemplary embodiment, may be employed for biometric identity verification by combining multimodal data collected using many sensors, including, for example, data such as anatomical characteristics, behavioral characteristics, demographic indicators, and artificial characteristics. In other exemplary embodiments, the system and method may learn hypercomplex function approximations in one environment and transfer the learning to other target environments. Other exemplary applications of the hypercomplex deep learning framework include: image segmentation; image quality evaluation; image steganalysis; face recognition; event embedding in natural language processing; machine translation between languages; object recognition; medical applications such as breast cancer mass classification; multispectral imaging; audio processing; color image filtering; and clothing identification.
Owner: Board of Regents, The University of Texas System
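
The patent's central move, transforming real-valued data into hypercomplex values and learning over them, can be illustrated with a quaternion-valued dense layer. Below is a minimal NumPy sketch under that assumption; the class name QuaternionDense and all dimensions are hypothetical, not taken from the patent.

```python
# Minimal sketch of a quaternion (hypercomplex) dense layer in NumPy.
# QuaternionDense is a hypothetical name; only the idea of hypercomplex
# parameters comes from the abstract, not this exact architecture.
import numpy as np

class QuaternionDense:
    """Fully connected layer with quaternion-valued inputs and weights.

    Inputs are a tuple of four (batch, in_units) arrays, one per quaternion
    component (r, i, j, k). The forward pass applies the Hamilton product
    expanded over matrix multiplies, so the four component matrices are
    shared across all output components (4x fewer parameters than a real
    layer of the same width).
    """
    def __init__(self, in_units, out_units, seed=0):
        rng = np.random.default_rng(seed)
        scale = 1.0 / np.sqrt(in_units)
        self.wr, self.wi, self.wj, self.wk = (
            rng.normal(0.0, scale, (in_units, out_units)) for _ in range(4)
        )

    def forward(self, x):
        xr, xi, xj, xk = x
        # Hamilton product identities: (r, i, j, k) components of x * w
        yr = xr @ self.wr - xi @ self.wi - xj @ self.wj - xk @ self.wk
        yi = xr @ self.wi + xi @ self.wr + xj @ self.wk - xk @ self.wj
        yj = xr @ self.wj - xi @ self.wk + xj @ self.wr + xk @ self.wi
        yk = xr @ self.wk + xi @ self.wj - xj @ self.wi + xk @ self.wr
        return yr, yi, yj, yk

# toy usage: a batch of 2 inputs, each 8 quaternions wide
layer = QuaternionDense(in_units=8, out_units=4)
x = tuple(np.random.default_rng(1).normal(size=(2, 8)) for _ in range(4))
print([c.shape for c in layer.forward(x)])  # four (2, 4) component arrays
```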

Multi-modal knowledge graph construction method

Pending · CN112200317A · Effects: rich knowledge types; three-dimensional knowledge types · Tags: Knowledge representation; Special data processing applications; Feature extraction; Engineering
The invention discloses a multi-modal knowledge graph construction method and relates to knowledge-engineering technology in the field of big data. The method is realized through the following technical scheme: first, multi-modal data semantic features are extracted with a multi-modal data feature representation model, constructing pre-training-based feature extraction models for text, images, audio, video, and the like, each completing single-modal semantic feature extraction; second, the different types of data are projected into the same vector space on the basis of unsupervised graph, attribute graph, and heterogeneous graph embedding, realizing a cross-modal multi-modal knowledge representation. On this basis, the two graphs to be fused and aligned are each converted into vector form; then, using the obtained multi-modal knowledge representation, the mapping relation between entity pairs across the knowledge graphs is learned from prior alignment data, multi-modal knowledge fusion and disambiguation are completed, the result is decoded and mapped to the corresponding nodes of the knowledge graphs, and a fused new graph with its entities and attributes is generated.
Owner: 10th Research Institute of CETC
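
The alignment stage, learning a mapping relation between entity pairs of two graphs from prior alignment data, can be sketched as follows. This NumPy sketch assumes entities are already embedded in vector spaces (e.g., by the per-modality encoders described above) and swaps in a plain linear least-squares mapping plus nearest-neighbour matching as an illustrative stand-in for the patent's learned mapping, which is not specified here.

```python
# Illustrative cross-graph entity alignment from seed pairs; the linear
# least-squares mapping is an assumption, not the patent's exact method.
import numpy as np

rng = np.random.default_rng(0)
d = 16
# toy embeddings: graph B is (approximately) a linear image of graph A
emb_a = rng.normal(size=(100, d))
true_map = rng.normal(size=(d, d))
emb_b = emb_a @ true_map + 0.01 * rng.normal(size=(100, d))

# prior alignment data: the first 30 entities are known to correspond
seeds = np.arange(30)
W, *_ = np.linalg.lstsq(emb_a[seeds], emb_b[seeds], rcond=None)

# align held-out entities by cosine nearest neighbour in the shared space
proj = emb_a[30:] @ W
sims = proj @ emb_b.T / (
    np.linalg.norm(proj, axis=1, keepdims=True) * np.linalg.norm(emb_b, axis=1)
)
pred = sims.argmax(axis=1)
accuracy = (pred == np.arange(30, 100)).mean()
print(f"alignment accuracy on held-out entities: {accuracy:.2f}")
```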

Micro-blog emotion prediction method based on weakly supervised multi-modal deep learning

Inactive · CN108108849A · Effects: improved sentiment classification; solves multimodal discriminative representation · Tags: Web data indexing; Forecasting; Microblogging; Predictive methods
The invention discloses a micro-blog emotion prediction method based on weakly supervised multi-modal deep learning and relates to the field of multi-modal sentiment analysis. The method comprises the following steps: preprocessing the micro-blog multi-modal data; weakly supervised training of a multi-modal deep learning model; and micro-blog emotion prediction with the trained model. The method addresses the problems of multi-modal discriminative representation and limited data labels found in prior-art emotion prediction over multi-channel micro-blog content, and produces the final multi-modal emotion class prediction, with accuracy adopted as the experimental evaluation metric: it reflects the agreement between the predicted micro-blog emotion polarity and the pre-annotated emotion category. Performance is greatly improved because the correlations among the modalities are taken into account, so the method achieves the best overall multi-modal results and a strong classification effect across the different emotion categories. Weakly supervised training also clearly improves the sentiment classification of the initial text and image models.
Owner: Xiamen University
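
A minimal PyTorch sketch of the pipeline's final stage: a late-fusion classifier over pretrained text and image features, trained on weak labels (e.g., derived from emoticons rather than manual annotation). The FusionSentiment module, feature dimensions, and weak-labelling source are assumptions for illustration, not the patent's model.

```python
# Illustrative late-fusion text+image sentiment classifier with weak labels.
# FusionSentiment and all dimensions are hypothetical, not from the patent.
import torch
import torch.nn as nn

class FusionSentiment(nn.Module):
    def __init__(self, text_dim=128, img_dim=256, n_classes=3):
        super().__init__()
        self.text_proj = nn.Sequential(nn.Linear(text_dim, 64), nn.ReLU())
        self.img_proj = nn.Sequential(nn.Linear(img_dim, 64), nn.ReLU())
        self.head = nn.Linear(128, n_classes)  # joint head over both modalities

    def forward(self, text_feat, img_feat):
        # concatenate modality representations so the head can model
        # correlations between text and image (the abstract's key point)
        joint = torch.cat([self.text_proj(text_feat), self.img_proj(img_feat)], dim=-1)
        return self.head(joint)

model = FusionSentiment()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# toy batch: features from pretrained encoders, weak (noisy) polarity labels
text_feat, img_feat = torch.randn(32, 128), torch.randn(32, 256)
weak_labels = torch.randint(0, 3, (32,))

opt.zero_grad()
logits = model(text_feat, img_feat)
loss = loss_fn(logits, weak_labels)
loss.backward()
opt.step()
# accuracy, the abstract's evaluation metric: predicted vs. pre-marked class
print((logits.argmax(dim=-1) == weak_labels).float().mean().item())
```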