Visual semantic embedding method and system based on data enhancement

A semantic and data-processing technology applied in the field of data-enhancement-based visual semantic embedding, achieving good generalization ability and improved convergence speed.

Pending Publication Date: 2022-04-08
NAT UNIV OF DEFENSE TECH


Problems solved by technology

[0007] Technical problem: Aiming at the problem that existing visual semantic embedding methods find it difficult to build semantic associations within a modality or to generate a unified representation, the present invention provides a data-enhancement-based visual semantic embedding method and system. By reconstructing the whole visual semantic embedding process and improving the generalization ability of the model through data enhancement, the present invention can effectively build intra-modal semantic associations and generate unified representations.




Embodiment Construction

[0050] The present invention will be further described below in conjunction with the embodiments and accompanying drawings. Figure 1 is a flow chart of a visual semantic embedding method based on data enhancement in an embodiment of the present invention; Figure 2 is a structural diagram of the visual semantic embedding model composed of all sub-models in the embodiment of the present invention. As shown in Figures 1 and 2, the method includes:

[0051] Step S100: Receive image data and text data.

[0052] Step S200: Use a first network model to perform object detection on the image and select several image regions according to confidence; use a second network model to extract fine-grained features of each image region, and map the fine-grained image features into the common embedding space through a fine-tuning network, obtaining a fine-grained representation of the image in the common embedding space.
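As an illustrative sketch only (the publication does not name concrete networks), Step S200 can be pictured as a confidence-based selection of detected regions followed by a learned fine-tuning projection into the common embedding space. The feature dimension (2048), embedding dimension (1024) and region count (36) below are assumptions, not values stated in the patent.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RegionEmbedder(nn.Module):
    """Sketch of Step S200: keep high-confidence regions and project their
    fine-grained features into the common embedding space.
    Dimensions (2048 -> 1024) and the top-k region count are assumptions."""

    def __init__(self, feat_dim=2048, embed_dim=1024, num_regions=36):
        super().__init__()
        self.num_regions = num_regions
        # "fine-tuning network": a learned projection into the common space
        self.fc = nn.Linear(feat_dim, embed_dim)

    def forward(self, region_feats, confidences):
        # region_feats: (N, feat_dim) features of detected regions (second network model)
        # confidences:  (N,) detection scores from the first network model
        k = min(self.num_regions, region_feats.size(0))
        top = confidences.topk(k).indices        # select regions by confidence
        v = self.fc(region_feats[top])           # map into the common embedding space
        return F.normalize(v, dim=-1)            # fine-grained image representation

# toy usage with dummy detector outputs
feats = torch.randn(50, 2048)
scores = torch.rand(50)
image_repr = RegionEmbedder()(feats, scores)     # shape (36, 1024)
```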

[0053] In one embodiment ...



Abstract

The invention discloses a visual semantic embedding method and system based on data enhancement, belonging to the technical field of deep learning. A first network model performs object detection on the image to select several image regions; a second network model extracts fine-grained features of the image regions, which are mapped through a fine-tuning network to obtain fine-grained representations of the image in the common embedding space; first semantic graph reasoning is carried out, followed by a unified pooling operation. A first extraction model extracts context-related word vector representations of the text; a second extraction model performs a fine-tuning mapping into the common embedding space to obtain word vector representations of the text in the common embedding space; second semantic graph reasoning is carried out, followed by a unified pooling operation. The first pooling result and the second pooling result are then semantically aligned, and the data are enhanced during model training. The method can effectively construct intra-modal semantic associations and generate unified representations in the common embedding subspace.
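The pipeline summarized above (graph reasoning over intra-modal nodes, unified pooling, and cross-modal semantic alignment) can be sketched roughly as follows. The similarity-weighted message passing, mean pooling and hinge-style ranking loss are assumptions standing in for operators the abstract does not specify, and all shapes are placeholders.

```python
import torch
import torch.nn.functional as F

def semantic_graph_reasoning(x, w_q, w_k, w_v):
    """One reasoning step over node embeddings x of shape (n, d): build a
    fully connected semantic graph whose edge weights come from pairwise
    similarity, then propagate features along the edges (assumed operator)."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    adj = F.softmax(q @ k.t() / k.size(-1) ** 0.5, dim=-1)   # soft adjacency matrix
    return x + adj @ v                                        # residual message passing

def unified_pooling(x):
    """Pool node embeddings into one vector in the common embedding space."""
    return F.normalize(x.mean(dim=0), dim=-1)

def alignment_loss(img_vec, txt_vec, margin=0.2):
    """Semantic alignment of pooled image and text vectors, shape (B, d) each.
    The patent does not state its loss; a margin-based ranking loss is assumed."""
    sim = img_vec @ txt_vec.t()                   # similarity of all image-text pairs
    pos = sim.diag().unsqueeze(1)                 # matching pairs on the diagonal
    cost = (margin + sim - pos).clamp(min=0)      # push non-matching pairs below margin
    cost = cost.masked_fill(torch.eye(sim.size(0), dtype=torch.bool), 0)
    return cost.mean()

# toy usage: 4 image/text pairs, 36 regions or 20 words per sample, d = 1024
d = 1024
w_q, w_k, w_v = (torch.randn(d, d) * 0.02 for _ in range(3))
imgs = torch.stack([unified_pooling(semantic_graph_reasoning(torch.randn(36, d), w_q, w_k, w_v)) for _ in range(4)])
txts = torch.stack([unified_pooling(semantic_graph_reasoning(torch.randn(20, d), w_q, w_k, w_v)) for _ in range(4)])
print(alignment_loss(imgs, txts))
```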

Description

Technical field

[0001] The invention belongs to the technical field of deep learning, and in particular relates to a visual semantic embedding method and system based on data enhancement.

Background technique

[0002] Image-text cross-modal entity discrimination aims to find image-text pairs with the same semantics. However, since images and texts belong to different modalities, their semantic alignment poses a great challenge.

[0003] Image-text cross-modal entity recognition methods can be divided into traditional methods and deep learning methods. Traditional methods generally use statistical analysis to learn a mapping matrix for cross-modal data by analyzing the distributions of the different modalities, thereby achieving semantic alignment. Among them, the most representative method is Canonical Correlation Analysis (CCA). Researchers have proposed a variety of CCA-based methods, such as KCCA, Multi-view CCA and ...
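For context, the traditional CCA line of work mentioned in [0003] learns linear projections that maximize the correlation between paired image and text features. A minimal sketch using scikit-learn's CCA on stand-in random features (all shapes are placeholders, not data from the patent) is:

```python
import numpy as np
from sklearn.cross_decomposition import CCA

# Stand-in features for 200 paired samples: 128-d image features, 64-d text features
rng = np.random.default_rng(0)
img_feats = rng.standard_normal((200, 128))
txt_feats = rng.standard_normal((200, 64))

# CCA learns linear mappings that maximally correlate the two views,
# i.e. the "mapping matrix" style of traditional cross-modal alignment.
cca = CCA(n_components=32)
cca.fit(img_feats, txt_feats)
img_proj, txt_proj = cca.transform(img_feats, txt_feats)  # aligned 32-d representations
print(img_proj.shape, txt_proj.shape)                     # (200, 32) (200, 32)
```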


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06F40/30, G06F40/242, G06N5/04, G06N3/04, G06N3/08
Inventor: 曹建军, 曾志贤, 翁年凤, 袁震, 江春, 丁鲲, 蒋国权
Owner: NAT UNIV OF DEFENSE TECH