Video semantic representation method and system based on multi-mode fusion mechanism and medium

A multi-modal video technology, applied in neural learning methods, computer components, and character and pattern recognition, that addresses problems such as the multi-modal heterogeneous gap, the difficulty of extracting features that characterize video data, and the semantic gap.

Active Publication Date: 2019-03-15
苏州吴韵笔墨教育科技有限公司

AI Technical Summary

Problems solved by technology

The diversity of video content, together with the variability and ambiguity in how that content is understood, makes it difficult to extract features that represent video data, which in turn makes video understanding based on semantic information more challenging.
[0004] Traditional data representation methods, such as vision-based video feature learning methods, can obtain concise representations of videos. However, constructing good features reasonably requires considerable experience and professional domain knowledge.
The application of deep learning methods has made remarkable progress in vision tasks, but problems such as the "semantic gap" and the "multi-modal heterogeneous gap" remain.




Embodiment Construction

[0073] It should be pointed out that the following detailed description is exemplary and intended to provide further explanation to the present application. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs.

[0074] It should be noted that the terminology used here is only for describing specific implementations and is not intended to limit the exemplary implementations according to the present application. As used herein, unless the context clearly dictates otherwise, the singular is intended to include the plural, and it should also be understood that when the terms "comprising" and/or "including" are used in this specification, they indicate the presence of features, steps, operations, devices, components and/or combinations thereof.

[0075] The disclosure first proposes a spatio-temporal feature learning model with an adaptive frame selection mechanism...
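Paragraph [0075] mentions an adaptive frame selection mechanism but the page truncates its description. As an illustration only, a minimal sketch of one plausible criterion (not the patent's actual mechanism; the scoring rule and function name here are assumptions) is to score each frame by how much it changes relative to its predecessor and keep the most dynamic frames:

```python
import numpy as np

def adaptive_select(frames, k):
    """Pick k representative frame indices by inter-frame change.

    Hypothetical criterion: frames with the largest mean absolute
    difference from their predecessor are treated as most informative.
    The first frame is always kept as an anchor.
    """
    frames = np.asarray(frames, dtype=np.float64)
    if len(frames) <= k:
        return list(range(len(frames)))
    # Mean absolute pixel difference between each frame and the previous one
    diffs = np.abs(frames[1:] - frames[:-1]).mean(axis=tuple(range(1, frames.ndim)))
    # Give frame 0 an infinite score so it is always selected
    scores = np.concatenate([[np.inf], diffs])
    idx = np.argsort(scores)[-k:]
    return sorted(idx.tolist())

# Synthetic "video": 4 static frames, then an abrupt scene change
video = np.concatenate([np.zeros((4, 2, 2)), np.full((4, 2, 2), 10.0)])
print(adaptive_select(video, 2))  # keeps frame 0 and the change point
```

In a real pipeline the selected frames would then feed the spatio-temporal feature extractor; any change-detection score (optical-flow magnitude, histogram distance) could replace the pixel difference used here.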



Abstract

The invention discloses a video semantic representation method, system and medium based on a multi-modal fusion mechanism. Feature extraction: visual, voice, motion, text and domain features of a video are extracted. Feature fusion: the extracted visual, voice, motion, text and domain features are fused through a constructed multi-level latent Dirichlet allocation (LDA) topic model. Feature mapping: the fused features are mapped to a high-level semantic space to obtain a fused feature representation sequence. The model exploits the unique advantages of topic models in semantic analysis, and the video representation obtained by training on this model exhibits good discriminability in the semantic space.
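The abstract describes fusing five modalities through a multi-level LDA topic model. LDA operates on discrete "words", so each modality's continuous features must first be quantized into tokens that form one pseudo-document per video. The sketch below shows one way such documents could be built (the binning scheme, token format and function names are assumptions for illustration, not the patent's specified procedure):

```python
import numpy as np

def modality_tokens(features, modality, n_bins=8):
    """Quantize one modality's feature vector into discrete 'words'.

    Each dimension is min-max binned, producing tokens like 'visual_1_7'
    (modality, dimension index, bin index) so that words from different
    modalities stay distinguishable inside a shared vocabulary.
    """
    features = np.asarray(features, dtype=np.float64)
    lo, hi = features.min(), features.max()
    bins = np.clip(((features - lo) / (hi - lo + 1e-9) * n_bins).astype(int),
                   0, n_bins - 1)
    return [f"{modality}_{d}_{b}" for d, b in enumerate(bins)]

def fuse_video(visual, voice, motion, text, domain):
    """Concatenate per-modality tokens into one pseudo-document per video.

    The resulting bag of words is the kind of input an LDA-style topic
    model would be fitted on; topic proportions inferred from it would
    serve as the fused representation.
    """
    doc = []
    for name, feats in [("visual", visual), ("voice", voice),
                        ("motion", motion), ("text", text), ("domain", domain)]:
        doc += modality_tokens(feats, name)
    return doc

doc = fuse_video([0.1, 0.9], [0.5], [0.2, 0.8, 0.3], [1.0], [0.0])
print(doc)  # e.g. ['visual_0_0', 'visual_1_7', 'voice_0_0', ...]
```

Fitting a topic model (e.g. scikit-learn's `LatentDirichletAllocation`) on the count matrix of many such documents would then yield per-video topic distributions, i.e. the fused feature representation sequence described in the abstract.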

Description

Technical field

[0001] The present disclosure relates to a video semantic representation method, system and medium based on a multi-modal fusion mechanism.

Background technique

[0002] With the explosive growth of data volume in the Internet era, the arrival of the era of media big data has accelerated. Video, as an important carrier of multimedia information, is closely related to people's lives. The evolution of massive data not only requires a great change in the way data is processed, but also poses great challenges to the storage, processing and application of video. An urgent problem to be solved is how to organize and manage data effectively. As data is continuously generated, hardware limitations mean it can only be stored in segments or in time slices, which inevitably causes different degrees of information loss. Therefore, providing a simple and efficient data representation method for video is meaningful for vid...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/00; G06N3/04; G06N3/08
CPC: G06N3/08; G06V20/41; G06N3/045
Inventors: 侯素娟, 车统统, 王海帅, 郑元杰, 王静, 贾伟宽, 史云峰
Owner: 苏州吴韵笔墨教育科技有限公司