
Video annotation method based on multiple modes

A multi-modal video annotation method in the fields of computer vision and video annotation. It addresses the problems that video files are complex, containing audio as well as image information, and that existing frame-level feature aggregation methods do not consider the importance of each frame to the video, which reduces the quality of the aggregated features. The effects achieved are accurate aggregation results and improved annotation accuracy.

Active Publication Date: 2020-09-29
HUAZHONG UNIV OF SCI & TECH

AI Technical Summary

Problems solved by technology

At present, most machine learning methods label videos based on visual features alone. Video files, however, are more complex: they contain not only image information but also audio information, so labeling a video by its visual features alone is not accurate enough. Moreover, current frame-level feature aggregation methods do not take into account the importance of each frame to the video, which greatly reduces the quality of the aggregated features.




Embodiment Construction

[0058] In order to make the objectives, technical solutions and advantages of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the present invention, not to limit it. In addition, the technical features involved in the various embodiments of the present invention described below can be combined with each other as long as they do not conflict.

[0059] As shown in Figure 1, an embodiment of the present invention provides a multi-modality-based video annotation method, including:

[0060] S1. Extract the key frames of the video through a clustering method;

[0061] The key frame extraction process, as shown in Figure 2, specifically includes:

[0062] S1.1. Taking the first frame of the video as the first category, calculate the color histogram...
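The source text is truncated at this step, but it describes seeding the first cluster with the first frame and comparing color histograms. The following is a minimal sketch of one plausible reading: sequential clustering over HSV color histograms, where a new cluster (and a new key frame) starts whenever similarity to the current cluster centroid falls below a threshold. The histogram configuration, the correlation similarity metric, and the threshold value are illustrative assumptions, not values taken from the patent.

```python
# Minimal sketch of clustering-based key frame extraction (assumptions noted above).
import cv2
import numpy as np

def extract_key_frames(video_path, similarity_threshold=0.85, bins=32):
    """Group consecutive frames into clusters by color-histogram similarity
    and return one representative (key) frame per cluster."""
    cap = cv2.VideoCapture(video_path)
    clusters = []      # each cluster: {"centroid": hist, "count": n_frames}
    key_frames = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0, 1], None, [bins, bins], [0, 180, 0, 256])
        hist = cv2.normalize(hist, hist).flatten()

        # Compare against the centroid of the current cluster; open a new
        # cluster (candidate key frame) when similarity drops below threshold.
        if clusters:
            sim = cv2.compareHist(
                clusters[-1]["centroid"].astype(np.float32),
                hist.astype(np.float32),
                cv2.HISTCMP_CORREL,
            )
        if not clusters or sim < similarity_threshold:
            clusters.append({"centroid": hist, "count": 1})
            key_frames.append(frame)   # first frame of the cluster as key frame
        else:
            c = clusters[-1]
            c["count"] += 1
            # Running mean keeps the centroid representative of its cluster.
            c["centroid"] = c["centroid"] + (hist - c["centroid"]) / c["count"]
    cap.release()
    return key_frames
```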



Abstract

The invention discloses a multi-modal video annotation method, belonging to the technical field of computer vision and video annotation. The method comprises the following steps: extracting key frames of a video through a clustering method; extracting features of the key frames and aggregating the consecutive key-frame features through a learnable pooling layer to generate the visual features of the video; extracting the audio from the video and dividing it into a plurality of independent frames; extracting audio-frame features and aggregating the consecutive audio-frame features through a learnable pooling layer to generate the audio features of the video; fusing the visual features and the audio features and inputting the result into a prediction module; and performing video annotation. Compared with the prior art, the method considers the visual features and the audio features of the video at the same time and adds an attention mechanism during frame-feature aggregation, so that the extracted video features are more representative and the video annotation accuracy is greatly improved.
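As a rough illustration of the aggregation-and-fusion pipeline the abstract describes, here is a hedged PyTorch sketch: a learned attention score weights each frame's importance before pooling, and the pooled visual and audio vectors are concatenated and fed to a multi-label classifier. The module names, feature dimensions, network depths, and the concatenation-based fusion are assumptions made for illustration; the patent text does not specify them.

```python
# Sketch of attention-weighted frame aggregation and audio-visual fusion
# (all dimensions and architecture choices are illustrative assumptions).
import torch
import torch.nn as nn

class AttentivePool(nn.Module):
    """Aggregate a sequence of frame features into one video-level vector,
    weighting each frame by a learned importance score."""
    def __init__(self, feat_dim):
        super().__init__()
        self.score = nn.Linear(feat_dim, 1)   # learns per-frame importance

    def forward(self, frames):                # frames: (batch, n_frames, feat_dim)
        weights = torch.softmax(self.score(frames), dim=1)  # (batch, n_frames, 1)
        return (weights * frames).sum(dim=1)                # (batch, feat_dim)

class MultiModalTagger(nn.Module):
    def __init__(self, visual_dim=1024, audio_dim=128, num_labels=1000):
        super().__init__()
        self.visual_pool = AttentivePool(visual_dim)
        self.audio_pool = AttentivePool(audio_dim)
        self.classifier = nn.Sequential(
            nn.Linear(visual_dim + audio_dim, 512),
            nn.ReLU(),
            nn.Linear(512, num_labels),
        )

    def forward(self, visual_frames, audio_frames):
        v = self.visual_pool(visual_frames)   # video-level visual feature
        a = self.audio_pool(audio_frames)     # video-level audio feature
        fused = torch.cat([v, a], dim=-1)     # simple concatenation fusion
        return torch.sigmoid(self.classifier(fused))  # multi-label scores

# Usage with dummy inputs (2 videos, 30 frames each):
# model = MultiModalTagger()
# scores = model(torch.randn(2, 30, 1024), torch.randn(2, 30, 128))
```

Concatenation is only the simplest fusion choice; the attention pooling is what realizes the abstract's claim that frames contribute to the video-level feature in proportion to their importance.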

Description

Technical field

[0001] The present invention belongs to the technical field of computer vision and video annotation, and more specifically relates to a multi-modality-based video annotation method.

Background technique

[0002] With the continuous development of the Internet, Internet portals built around video applications have developed rapidly in China, and video has become a way for people to communicate with each other and share their lives. Every day, a large number of videos are uploaded to domestic video-sharing websites such as Youku and Douyin. Compared with voice, text and other media files, the data structure of video is more complicated; it carries more useful information, and its content is more vivid and intuitive. Although video data contains a wealth of information that other data forms cannot match, its complex data format and ever-growing volume undoubtedly set up huge obstacles to user interaction and affect its...


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06F16/78, G06F16/75, G06F16/783, G06K9/62, G06N3/04, G06N3/08
CPC: G06F16/7867, G06F16/75, G06F16/7834, G06F16/7847, G06N3/08, G06N3/045, G06F18/253, Y02T10/40
Inventors: 李瑞轩, 刘旺, 辜希武, 李玉华
Owner: HUAZHONG UNIV OF SCI & TECH