
Method and device for extraction of user portrait labels for cold chain stowage based on multimodality

A multi-modal, cold-chain technology, applied in character and pattern recognition, biological neural network models, special data processing applications, etc.; it addresses problems such as the semantic differences of different modal features being ignored, features that are difficult to fuse, and the limited state of existing research.

Active Publication Date: 2022-07-29
芽米科技(广州)有限公司

AI Technical Summary

Problems solved by technology

[0003] In existing user portrait modeling technology, research on how to integrate multiple data sources or modalities to obtain a more accurate user portrait is quite limited, and it has the following deficiencies: 1. Some user portrait research focuses on only a single data source or modality; 2. A simple and straightforward integration method is to directly splice the feature vectors of the various modalities at the input stage, but this ignores the semantic differences between modal features, fails to establish interaction relationships between the modalities, and suffers from large noise interference.




Embodiment Construction

[0067] The present invention will be further described below in conjunction with the accompanying drawings. The following examples are only used to illustrate the technical solutions of the present invention more clearly, and cannot be used to limit the protection scope of the present invention.

[0068] The invention discloses a method and device for extracting user portrait labels for cold chain stowage based on multimodality; the method comprises the following steps.

[0069] Step 1: Input the original cold chain stowage data set D1, remove duplicate and empty records, and obtain the cleaned sample set D2:

[0070] Step 1.1: Define Data as a single data record to be cleaned, and define id and content as the record's serial number and content, so that Data = {id, content};

[0071] Step 1.2: Define D1 as the data set to be cleaned, D1 = {Data_1, Data_2, …, Data_a, …, Data_len(D1)}, where Data_a is the a-th data record to be cleaned in D1, len(D1) is the number of records in D1, and the variable ...
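As a concrete illustration of Step 1, the following is a minimal Python sketch of the described cleaning pass over D1. It assumes each record is a dictionary of the form Data = {id, content}, that an empty content field counts as a null entry, and that two records are duplicates when their content matches; the excerpt is truncated before these details are fixed, so the field types and the deduplication key are assumptions.

```python
from typing import TypedDict


class Data(TypedDict):
    """A single record to be cleaned: Data = {id, content} (Step 1.1)."""
    id: int
    content: str


def clean_dataset(d1: list[Data]) -> list[Data]:
    """Step 1: drop empty and duplicate records from D1, returning D2.

    Assumption: two records count as duplicates when their `content` matches;
    the patent excerpt does not spell out the comparison key.
    """
    seen: set[str] = set()
    d2: list[Data] = []
    for record in d1:                      # iterate Data_1 .. Data_len(D1)
        content = record["content"].strip()
        if not content:                    # remove null / empty entries
            continue
        if content in seen:                # remove duplicate entries
            continue
        seen.add(content)
        d2.append(record)
    return d2


# Toy usage with illustrative cold-chain stowage records
if __name__ == "__main__":
    d1: list[Data] = [
        {"id": 1, "content": "frozen seafood, -18C, pallet A"},
        {"id": 2, "content": ""},                                # empty -> dropped
        {"id": 3, "content": "frozen seafood, -18C, pallet A"},  # duplicate -> dropped
    ]
    print(clean_dataset(d1))  # keeps only the record with id 1
```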



Abstract

The invention discloses a method and device for extracting user portrait labels for cold chain stowage based on multimodality. The labeled text set is fed into a BERT model, and a long short-term memory network further extracts text features; YOLO is used to extract image features. Each of the two modal feature sets is supplemented with a correlation representation built from a tanh activation over the other modality's embedding. The supplemented bimodal feature vectors are spliced and matrix-multiplied with a bimodal condition vector, and the result is passed to a Softmax function to obtain the bimodal interaction attention matrix. The attention matrix is then spliced with the supplemented bimodal features and fed into a fully connected layer to obtain inter-modal interaction features and intra-modal features, which are finally input to Softmax for classification. The invention uses a multi-modal feature fusion algorithm to merge user features of different modalities, establishes interaction relationships between the modalities, and reduces extraction noise.
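The fusion stage described above can be illustrated with a short, hedged PyTorch sketch. It assumes the text branch (BERT followed by an LSTM) and the image branch (YOLO) have already produced fixed-length feature vectors, and it fills in details the abstract leaves open: the feature dimensions, the exact form of the tanh correlation terms, and the shape of the bimodal condition vector (modeled here as a bias-free learned linear map). It is one reading of the described steps, not the patented implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class BimodalInteractionFusion(nn.Module):
    """Sketch of the bimodal interaction attention fusion (assumed details)."""

    def __init__(self, text_dim: int, img_dim: int, hidden_dim: int, n_classes: int):
        super().__init__()
        joint_dim = text_dim + img_dim
        # cross-modal projections used to build the tanh "correlation" supplements
        self.text_from_img = nn.Linear(img_dim, text_dim)
        self.img_from_text = nn.Linear(text_dim, img_dim)
        # "bimodal condition vector", assumed here to be a bias-free linear map
        self.condition = nn.Linear(joint_dim, joint_dim, bias=False)
        # fully connected layer over [attention ; supplemented features]
        self.fc = nn.Linear(2 * joint_dim, hidden_dim)
        self.classifier = nn.Linear(hidden_dim, n_classes)

    def forward(self, t: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
        # 1) supplement each modality with a tanh correlation of the other modality
        t_sup = t + torch.tanh(self.text_from_img(v))   # text supplemented by image
        v_sup = v + torch.tanh(self.img_from_text(t))   # image supplemented by text
        # 2) splice (concatenate) the supplemented bimodal features
        joint = torch.cat([t_sup, v_sup], dim=-1)
        # 3) multiply with the condition weights and apply Softmax
        #    -> bimodal interaction attention (one row per sample)
        attn = F.softmax(self.condition(joint), dim=-1)
        # 4) splice the attention with the supplemented features, pass through the FC layer
        interaction = torch.relu(self.fc(torch.cat([attn, joint], dim=-1)))
        # 5) final Softmax classification over user portrait labels
        return F.softmax(self.classifier(interaction), dim=-1)


# Toy usage; the dimensions and class count are illustrative only
model = BimodalInteractionFusion(text_dim=768, img_dim=256, hidden_dim=128, n_classes=10)
probs = model(torch.randn(4, 768), torch.randn(4, 256))
print(probs.shape)  # torch.Size([4, 10])
```

In practice one would train on logits with a cross-entropy loss rather than applying Softmax inside the model, but the Softmax is kept here to mirror the wording of the abstract.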

Description

Technical Field

[0001] The invention relates to the technical field of user portraits and multi-modal fusion, and in particular to a method and device for extracting user portrait labels for cold chain stowage based on multimodality.

Background Technique

[0002] In recent years, user portraits have become a research focus around the world and are attracting increasingly wide attention from industry and academia. More importantly, they are also one of the key technologies behind many applications.

[0003] In existing user portrait modeling technology, research on how to integrate multiple data sources or modalities to obtain a more accurate user portrait is quite limited, with the following shortcomings: 1. Some user portrait research involves only a single data source or modality; 2. A simple and easy integration method is to directly splice the feature vectors of the various modalities at the input stage, but this ignores the semantic differences between modal features, fails to establish interaction relationships between the modalities, and suffers from large noise interference.


Application Information

Patent Type & Authority: Patent (China)
IPC (8): G06F16/35; G06V10/762; G06K9/62; G06V10/764; G06V10/80; G06V10/774; G06V10/82; G06N3/04
CPC: G06F16/355; G06N3/049; G06N3/044; G06N3/045; G06F18/23213; G06F18/214; G06F18/2415; G06F18/253
Inventors: 李翔, 张宁, 谢乾, 朱全银, 高尚兵, 马甲林, 王媛媛, 丁行硕, 束玮, 张豪杰, 丁婧娴, 张曼, 费晶茹, 洪玉昆, 杨秋实, 徐伟
Owner: 芽米科技(广州)有限公司