
Multi-source multi-view-angle transductive learning-based short video automatic tagging method and system

An automatic tagging, multi-view technology, applied in the field of short video tagging, that solves the problem of a lack of multi-source, multi-view feature fusion capability in existing methods.

Status: Inactive | Publication Date: 2017-07-04
NORTHEAST PETROLEUM UNIV
  • Summary
  • Abstract
  • Description
  • Claims
  • Application Information

AI Technical Summary

Problems solved by technology

The above methods perform semantic annotation through visual analysis alone; none of them can fuse multi-source, multi-view features, and they are therefore unsuited to short video media that carry multi-source, multi-view descriptions.




Embodiment Construction

[0101] The present invention is described below by way of examples, but it should be noted that the invention is not limited to these examples. In the following detailed description, certain specific details are set forth; however, parts that are not described in detail will be fully understood by those skilled in the art.

[0102] In addition, those of ordinary skill in the art should understand that the accompanying drawings are provided only to illustrate the objects, features, and advantages of the present invention, and are not drawn to scale.

[0103] At the same time, unless the context clearly requires otherwise, the words "comprise", "include", and similar words throughout the specification and claims should be interpreted in an inclusive sense rather than an exclusive or exhaustive one; that is, as "including, but not limited to".

[0104] Figure 1 is a flowchart of a short vi...



Abstract

The invention discloses a multi-source multi-view-angle transductive learning-based short video automatic tagging method. The method comprises the steps of obtaining short video data; preprocessing the short video data to generate image key frames, audio tracks, texts, and semantic tags in a consistent format; extracting multi-view-angle feature vectors of the image key frames, the audio tracks, and the texts; establishing a short video tagging database, wherein the multi-view-angle feature vectors and the semantic tags are stored in the short video tagging database; calculating the similarity between the multi-view-angle feature vectors; establishing a multi-view-angle fusion space through the similarity between the multi-view-angle feature vectors; and transductively solving the multi-view-angle fusion space, and automatically tagging the semantic tags onto the to-be-tagged short video data. The invention furthermore discloses a multi-source multi-view-angle transductive learning-based short video automatic tagging system. According to the method and the system, multi-source information attached to the short video data is fully considered, so that the tagging accuracy is improved.
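The full claims are not reproduced on this page, so the sketch below is only a minimal illustration of the pipeline the abstract describes, under assumptions it does not confirm: a Gaussian-kernel similarity per view, a weighted sum of per-view similarity graphs as the "fusion space", and closed-form graph label propagation (in the style of Zhou et al.) as the transductive solver. The view weights, sigma, and alpha values are illustrative, not taken from the patent.

```python
# Hedged sketch of the abstract's pipeline: per-view similarity ->
# weighted graph fusion -> transductive label propagation.
# All parameter values and the choice of kernel/solver are assumptions.
import numpy as np

def view_similarity(X, sigma=1.0):
    """Pairwise Gaussian-kernel similarity for one view's feature vectors."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-np.maximum(d2, 0.0) / (2.0 * sigma**2))

def fuse_views(views, weights):
    """Weighted sum of per-view similarity matrices (the fused graph)."""
    W = sum(w * view_similarity(X) for w, X in zip(weights, views))
    np.fill_diagonal(W, 0.0)  # no self-loops
    return W

def transductive_tag(W, Y, alpha=0.9):
    """Closed-form label propagation: F = (I - alpha * S)^-1 @ Y,
    with S the symmetrically normalized fused similarity graph."""
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    S = D_inv_sqrt @ W @ D_inv_sqrt
    n = W.shape[0]
    return np.linalg.solve(np.eye(n) - alpha * S, Y)

# Toy usage: 6 short videos, 3 views (key-frame, audio, text features);
# the first 4 carry one of 2 semantic tags, the last 2 are to be tagged.
rng = np.random.default_rng(0)
views = [rng.normal(size=(6, d)) for d in (128, 64, 32)]
Y = np.zeros((6, 2))
Y[[0, 1], 0] = 1.0  # tag 0
Y[[2, 3], 1] = 1.0  # tag 1
W = fuse_views(views, weights=[0.5, 0.3, 0.2])
F = transductive_tag(W, Y)
print(F[4:].argmax(axis=1))  # predicted tags for the two unlabeled videos
```

The closed-form solve costs O(n^3) in the database size; for a large tagging database the iterative update F ← alpha·S·F + Y converges to the same fixed point and would be the more practical choice.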

Description

Technical field

[0001] The invention relates to the field of short video tagging, and in particular to a short video automatic tagging method and system based on multi-source, multi-view transductive learning.

Background technique

[0002] With the development of mobile communication and Internet technology and the popularization of smart terminals, short videos shot on mobile phones, tablet computers, and other devices and shared within social circles have become a popular form of social application. Short video sharing of this kind originated with the video-sharing website Vine in 2013, whose mobile client limited the shooting time of a short video to 6 seconds. Various short video applications, such as Instagram, Meipai, WeChat, Weibo, Tencent Weishi, Weikepai, etc., have developed rapidly in recent years, and the duration of short videos has also expanded to 60 seconds. Short videos are seamlessly connected to various social platforms on the Internet,...

Claims


Application Information

Patent Type & Authority Applications(China)
IPC(8): G06F17/30; G06K9/00
CPC: G06F16/783; G06F16/7867; G06V20/41
Inventors: 田枫 (Tian Feng), 尚福华 (Shang Fuhua), 周凯 (Zhou Kai)
Owner: NORTHEAST PETROLEUM UNIV