
Method for detecting specific contained semantics of video based on grouped multi-instance learning model

A technology combining multi-instance learning with detection methods, applied in television, color television, and character and pattern recognition; it addresses problems such as the loss of effective information.

Active Publication Date: 2011-08-17
SHANGHAI JIAO TONG UNIV

AI Technical Summary

Problems solved by technology

The prior method does not further optimize the video content itself, but focuses on the storage format.
At the same time, that method relies on key-frame processing, which may lead to the loss of effective information.




Embodiment Construction

[0041] The embodiments of the present invention are described in detail below. This embodiment is implemented on the premise of the technical solution of the present invention, and detailed implementation modes and specific operation procedures are given, but the protection scope of the present invention is not limited to the following examples.

[0042] As shown in Figures 1 to 3, this embodiment includes the following steps:

[0043] The first step is to divide the video. The specific steps are:

[0044] i. Make a database

[0045] This step essentially selects videos with the same content. In this embodiment, 250 videos that have been surveyed and confirmed to contain the specific semantics are selected. Taking recomposed video as an example, unknown target videos are then compared against this set to measure how close they are to it.
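The abstract describes the videos being "continuously cut according to the shots" before any comparison takes place, but this excerpt does not say how shot boundaries are found. A common heuristic is to mark a boundary wherever the color histogram changes sharply between consecutive frames; the sketch below illustrates that idea and is an assumption, not the patent's disclosed method (`shot_boundaries` and its threshold are illustrative names and values).

```python
def histogram(frame, bins=8):
    # frame: flat list of grayscale pixel values in 0..255.
    # Returns a normalized histogram with `bins` buckets.
    h = [0] * bins
    for p in frame:
        h[min(p * bins // 256, bins - 1)] += 1
    total = len(frame)
    return [c / total for c in h]

def shot_boundaries(frames, threshold=0.5):
    # Mark a shot boundary at frame i whenever the L1 distance between
    # the histograms of frames i-1 and i exceeds the threshold.
    boundaries = []
    prev = histogram(frames[0])
    for i in range(1, len(frames)):
        cur = histogram(frames[i])
        if sum(abs(a - b) for a, b in zip(prev, cur)) > threshold:
            boundaries.append(i)
        prev = cur
    return boundaries
```

On a synthetic clip of five dark frames followed by five bright frames, this flags a single boundary at the transition; real systems typically combine several such cues (color, edges, motion) for robustness.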

[0046] Step 2: Convert video to image

[0047] Since the videos come in different formats, first use ffmpeg t...
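The text cuts off mid-sentence here, but the abstract states that the FFMPEG tool is used to capture, on average, 25 evenly spaced frames from each segment. The sketch below builds one plausible ffmpeg command line for that; the exact flags the authors used are not disclosed, so the `fps` filter approach and the function name are assumptions for illustration.

```python
def ffmpeg_frame_command(segment_path, out_pattern, duration_s, n_frames=25):
    """Build an ffmpeg command that samples roughly n_frames frames
    evenly across a segment of known duration.

    The -vf fps filter rate is chosen so that about n_frames frames are
    emitted over duration_s seconds. This is an illustrative sketch, not
    the patent's disclosed invocation.
    """
    fps = n_frames / duration_s
    return [
        "ffmpeg", "-i", segment_path,
        "-vf", f"fps={fps:.4f}",
        out_pattern,
    ]
```

For a 10-second segment this yields an `fps=2.5000` filter, i.e. one frame every 0.4 seconds, giving the 25 evenly spaced screenshots the abstract describes.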


Abstract

The invention relates to a method for detecting specific contained semantics of a video based on a grouped multi-instance learning model, in the technical field of computer video processing. The method comprises the following steps: continuously cutting the video according to shots, thereby acquiring a plurality of video segments; using the FFMPEG tool to extract image descriptors for each video segment Sij, with on average 25 pictures captured from each segment at equal intervals; extracting the related audio descriptors from the video's audio track, the visual descriptors from the set of video screenshots, and the degree of motion from the video; performing machine learning on each set of descriptors; and, after the machine learning, computing the Euclidean distance between the learning result and a descriptor of a target video, taking the minimum value obtained as the degree of closeness between the original videos and the target video under that descriptor.
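The final matching step in the abstract, taking the minimum Euclidean distance between a target video's descriptor and the learned results as the "degree of closeness", can be sketched as below. This is a minimal illustration: `approach_degree` is a hypothetical name, and descriptors are treated as plain numeric vectors.

```python
import math

def euclidean(u, v):
    # Euclidean (L2) distance between two equal-length vectors.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def approach_degree(learned_descriptors, target_descriptor):
    # The minimum Euclidean distance between the target video's
    # descriptor and the set of learned descriptors is taken as the
    # degree of closeness under that descriptor type.
    return min(euclidean(d, target_descriptor) for d in learned_descriptors)
```

A smaller value means the target video is closer to the learned semantic class under that particular descriptor; the abstract implies one such score is computed per descriptor group.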

Description

Technical field

[0001] The present invention relates to a method in the technical field of computer video processing, in particular to a method for detecting specific contained semantics of a video based on an MGIL (Multiple Grouped Instance Learning) model.

Background technique

[0002] In today's networked environment, online video has become an indispensable part of many users' lives. Video sites such as Tudou, Youku, and Youtube provide users with a rich variety of video programs, and film and television have become one of the most popular forms of leisure worldwide, forming a healthy economic cycle. In practice, however, some videos are not suitable for publication on a website: such videos and TV shows may have an adverse effect on the development of young people. At the same time, because some videos are protected by copyright, they require special treatment to prevent copyright infringement on the website. T...


Application Information

IPC(8): G06F15/18; G06K9/62; H04N5/262
Inventors: 蒋兴浩, 孙锬锋, 沈楚雄, 吴斌, 张善丰, 储曦庆, 樊静文
Owner: SHANGHAI JIAO TONG UNIV