Horrible video scene recognition method based on multi-view and multi-instance learning

A multi-instance learning and video recognition technology, applied in the field of horror video scene recognition, which addresses problems such as a decline in the recognition rate

Active Publication Date: 2013-12-25
人民中科(北京)智能技术有限公司

AI Technical Summary

Problems solved by technology

Since not all frames of a horror video contain horror information, extracting the average features of the entire horror video leads to a decline in the recognition rate.



Examples

Example Embodiment

[0068] In order to make the objectives, technical solutions and advantages of the present invention clearer, the present invention will be further described in detail below in conjunction with specific embodiments and with reference to the accompanying drawings.

[0069] Figure 1 shows the horror video scene recognition method based on multi-view and multi-instance learning provided by the present invention. As shown in Figure 1, the method specifically includes the following steps:

[0070] Step 1: Perform structured analysis on the video: extract video shots using a mutual-information-entropy shot segmentation algorithm based on information theory, and then select an emotional representative frame and an emotional mutation frame for each shot to represent that shot (see the sketches after Step 1.1 below). The specific extraction steps include:

[0071] Step 1.1: Calculate the color emotion intensity value of each video frame, shot by shot; assuming that the i-th frame image is composed of K rows and L ...
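The mutual-information-entropy shot segmentation mentioned in Step 1 is not spelled out in this excerpt. Below is a minimal Python/NumPy sketch of the standard approach the step alludes to: compute the mutual information between the grey-level distributions of consecutive frames and declare a shot boundary where it drops sharply. The histogram size, the adaptive threshold, and the function names are illustrative assumptions, not the patent's exact formulation.

```python
import numpy as np

def joint_histogram(frame_a, frame_b, bins=64):
    """Joint grey-level histogram of two equally sized grey-scale frames."""
    a = np.clip((frame_a.astype(np.float64) * bins / 256.0).astype(int), 0, bins - 1)
    b = np.clip((frame_b.astype(np.float64) * bins / 256.0).astype(int), 0, bins - 1)
    joint = np.zeros((bins, bins))
    np.add.at(joint, (a.ravel(), b.ravel()), 1.0)
    return joint / joint.sum()

def mutual_information(frame_a, frame_b, bins=64):
    """Mutual information I(A;B) between two consecutive frames."""
    p_ab = joint_histogram(frame_a, frame_b, bins)
    p_a = p_ab.sum(axis=1, keepdims=True)   # marginal of frame A
    p_b = p_ab.sum(axis=0, keepdims=True)   # marginal of frame B
    nz = p_ab > 0                           # avoid log(0)
    return float(np.sum(p_ab[nz] * np.log2(p_ab[nz] / (p_a @ p_b)[nz])))

def detect_shot_boundaries(frames, ratio=0.3):
    """Declare a cut wherever the MI of a frame pair falls far below the
    running mean of the previous pairs (simple adaptive threshold)."""
    mi = [mutual_information(frames[i], frames[i + 1]) for i in range(len(frames) - 1)]
    boundaries = []
    for i, value in enumerate(mi):
        baseline = np.mean(mi[:i]) if i > 0 else value
        if value < ratio * baseline:
            boundaries.append(i + 1)   # new shot starts at frame i + 1
    return boundaries
```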
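Step 1.1 is truncated in this excerpt, so the per-pixel colour emotion formula cannot be reproduced here. The sketch below only illustrates the surrounding logic of Step 1: score every frame of a shot with a colour emotion intensity value (a crude saturation/brightness proxy stands in for the patent's K-row, L-column pixel computation), then pick the emotional representative frame and the emotional mutation frame. The selection criteria used here (closest to the shot's mean intensity; largest frame-to-frame intensity jump) are assumptions for illustration only.

```python
import numpy as np

def color_emotion_intensity(frame_rgb):
    """Placeholder colour emotion intensity for one RGB frame (H x W x 3, uint8).
    Stands in for the patent's per-pixel computation over K rows and L
    columns (truncated above); here: mean of a saturation/brightness norm."""
    rgb = frame_rgb.astype(np.float64) / 255.0
    v = rgb.max(axis=2)                                                     # brightness
    s = np.where(v > 0, (v - rgb.min(axis=2)) / np.maximum(v, 1e-8), 0.0)   # saturation
    return float(np.mean(np.sqrt(s ** 2 + v ** 2)))

def select_key_frames(shot_frames):
    """Return (representative_idx, mutation_idx) for one shot.
    Assumed criteria: the representative frame is the one closest to the
    shot's mean emotion intensity; the mutation frame follows the largest
    frame-to-frame intensity change."""
    intensity = np.array([color_emotion_intensity(f) for f in shot_frames])
    representative = int(np.argmin(np.abs(intensity - intensity.mean())))
    if len(intensity) > 1:
        mutation = int(np.argmax(np.abs(np.diff(intensity)))) + 1
    else:
        mutation = representative
    return representative, mutation
```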



Abstract

The invention discloses a horror video scene recognition method based on multi-view and multi-instance learning. The method comprises the steps of: extracting video shots from the videos in a training video set, and selecting an emotional representative frame and an emotional mutation frame for each video shot; extracting audio and visual features for each video shot in the training video set, the visual features being extracted on the basis of the selected emotional representative frame and emotional mutation frame; extracting four view feature vectors for each video to form the multi-view feature set of the training video set; carrying out sparse reconstruction of the multi-view feature vectors of a video to be recognized over the multi-view feature set of the training video set to obtain a sparse reconstruction coefficient; and, according to the sparse reconstruction coefficient, calculating the reconstruction errors of the multi-view feature vectors of the video to be recognized with respect to the multi-view feature sets of the horror video set and the non-horror video set in the training video set, and thereby determining whether the video to be recognized is a horror video.
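The classification stage described in the abstract is a sparse-representation-style decision: the multi-view features of the test video are sparsely reconstructed over the training set, and the class whose training videos yield the smaller reconstruction error wins. The sketch below illustrates that idea; the concrete sparse solver (scikit-learn's Lasso), the regularisation strength, and the summation of errors over the four views are assumptions of this sketch, not the patent's specification.

```python
import numpy as np
from sklearn.linear_model import Lasso

def classwise_reconstruction_errors(train_views, train_labels, test_views, alpha=0.01):
    """Sum per-view reconstruction errors against the horror (label 1) and
    non-horror (label 0) parts of the training set.

    train_views : list of arrays, one per view, each of shape (d_v, n_train);
                  column j is training video j's feature vector for that view.
    train_labels: (n_train,) array with 1 = horror, 0 = non-horror.
    test_views  : list of arrays, one per view, each of shape (d_v,).
    """
    horror_mask = (train_labels == 1).astype(float)
    err_horror, err_normal = 0.0, 0.0
    for D, x in zip(train_views, test_views):
        # Sparse reconstruction coefficient over ALL training columns.
        lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000)
        lasso.fit(D, x)
        c = lasso.coef_
        # Evaluate the reconstruction separately with each class's coefficients.
        err_horror += np.linalg.norm(x - D @ (c * horror_mask))
        err_normal += np.linalg.norm(x - D @ (c * (1.0 - horror_mask)))
    return err_horror, err_normal

def is_horror_video(train_views, train_labels, test_views):
    e_h, e_n = classwise_reconstruction_errors(train_views, train_labels, test_views)
    return e_h < e_n   # smaller class-wise reconstruction error decides the label
```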

Description

Technical field

[0001] The invention relates to the field of pattern recognition and computer network content security, and in particular to a horror video scene recognition method based on multi-view and multi-instance learning.

Background technique

[0002] With the rapid development of Internet technology and applications, people's understanding and use of the Internet have become deeper and deeper. The Internet has brought great convenience to people's lives and has even changed the way people live. At the same time, the rapid development of the Internet has also made the spread of harmful information such as pornography, violence, and horror easier and easier. Psychological and physiological studies have shown that horror information on the Internet is no less harmful to the physical and mental health of young people than pornographic information: excessive exposure to horror information may keep people in extreme anxiety and fear for a long time, and ev...


Application Information

IPC(8): G06K 9/62
Inventors: 胡卫明, 丁昕苗, 李兵
Owner: 人民中科(北京)智能技术有限公司