
Network violent video identification method

A violent-video identification technology applied in the field of video classification, which addresses the problems of small data volumes and of degraded performance and processing speed on massive network video, and achieves the effect of reduced dimensionality and space complexity.

Active Publication Date: 2013-07-24
人民中科(北京)智能技术有限公司
Cites: 1 · Cited by: 39

AI Technical Summary

Problems solved by technology

Existing violent video recognition methods are mostly evaluated on clips taken from one movie or a few movies, so the amount of data involved is small. When applied to the massive volume of video on the network, the performance and processing speed of these methods degrade to varying degrees.




Embodiment Construction

[0012] In order to make the objects, technical solutions, and advantages of the present invention clearer, the present invention is described in further detail below in conjunction with specific embodiments and with reference to the accompanying drawings.

[0013] The invention proposes a method for identifying violent network video. In this method, violent and non-violent videos, together with their introductions and comments on video sharing websites, are collected as samples to build a video training set. Text features related to the training videos are extracted from this text information, and the resulting feature vectors are used to train a pre-classifier; the pre-classifier then screens new video samples to obtain candidate violent videos. The video clips in the training set are segmented into shots, and low-level visual and audio features of each shot are extracted to form a feature vector representing that shot. Each shot is treated as an instance in multiple-instance learning and each video clip as a bag; the MILES algorithm converts every bag into a single feature vector, which is used to train a classifier model that finally classifies the candidate violent videos.
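
The patent names the MILES algorithm (Multiple-Instance Learning via Embedded instance Selection) for converting each bag of shots into a single feature vector, but the text above gives no implementation details. The following is only a minimal sketch of the standard MILES bag embedding in Python; the feature dimension, the Gaussian bandwidth sigma, and the use of scikit-learn's LinearSVC in place of MILES's 1-norm SVM are assumptions made here for illustration.

```python
import numpy as np
from sklearn.svm import LinearSVC

def miles_embed(bags, instance_pool, sigma=1.0):
    """Map each bag (one video clip's shot features) to a single vector.

    bags: list of (n_i, d) arrays, one row per shot of the clip.
    instance_pool: (K, d) array of candidate "concept" instances,
                   typically all shots from the training set.
    Returns an (len(bags), K) array whose [i, k] entry is
    max_j exp(-||x_ij - x_k||^2 / sigma^2).
    """
    embedded = np.zeros((len(bags), instance_pool.shape[0]))
    for i, bag in enumerate(bags):
        # squared distances between every shot in the bag and every pool instance
        d2 = ((bag[:, None, :] - instance_pool[None, :, :]) ** 2).sum(axis=2)
        # a bag is described by its closest shot to each candidate concept
        embedded[i] = np.exp(-d2 / sigma ** 2).max(axis=0)
    return embedded

if __name__ == "__main__":
    # Illustrative run on random "shot features" (hypothetical sizes and values).
    rng = np.random.default_rng(0)
    train_bags = [rng.normal(size=(rng.integers(3, 8), 16)) for _ in range(12)]
    labels = rng.integers(0, 2, size=12)           # 1 = violent clip, 0 = non-violent
    pool = np.vstack(train_bags)                   # instance pool = all training shots
    X = miles_embed(train_bags, pool, sigma=2.0)   # one single-instance vector per clip

    clf = LinearSVC(C=1.0, dual=False).fit(X, labels)  # stand-in for the 1-norm SVM
    print(clf.predict(X[:3]))
```

After this embedding, each video clip is an ordinary single-instance sample, so any standard classifier can be trained on the embedded vectors and then applied to the candidate violent videos.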



Abstract

The invention discloses a network violent video identification method based on multiple instances and multiple features. The method comprises the steps of: crawling violent videos, non-violent videos, and their comments and brief introductions from a video sharing website, and constructing a video training set; extracting text features from the text information of the training set, forming text feature vectors to train a text pre-classifier, and screening out candidate violent videos with the pre-classifier; segmenting the video clips of the candidate violent videos into shots with a shot segmentation algorithm based on an adaptive dual threshold, extracting the visual and audio features of each shot to represent it, and taking each shot as an instance in multiple-instance learning and each video clip as a bag; and using the MILES algorithm to convert each bag into a single instance, training a classifier model with the resulting feature vectors, and using the classifier model to classify the candidate violent videos. Use of the method greatly reduces the harmful influence of network violent videos spreading without constraint.
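
The abstract relies on an adaptive dual-threshold shot segmentation step but does not specify it further. Below is a minimal sketch of a common twin-comparison style dual-threshold detector over frame-histogram differences, written in Python; adapting the two thresholds from the mean and standard deviation of recent differences, as well as the window size and the multipliers a and b, are assumptions of this sketch rather than details taken from the patent.

```python
import numpy as np

def frame_hist(frame, bins=32):
    """Normalized grayscale intensity histogram of one frame (2-D uint8 array)."""
    hist, _ = np.histogram(frame, bins=bins, range=(0, 256))
    return hist / max(hist.sum(), 1)

def dual_threshold_shots(frames, window=50, a=3.0, b=1.0):
    """Detect shot boundaries with a twin-comparison style dual threshold.

    frames: sequence of grayscale frames.
    window: number of recent differences used to adapt the thresholds.
    a, b:   std-dev multipliers for the high / low threshold (illustrative values).
    Returns indices of frames that start a new shot (always including 0).
    """
    hists = [frame_hist(f) for f in frames]
    diffs = [np.abs(hists[t] - hists[t - 1]).sum() for t in range(1, len(hists))]

    boundaries, acc, start = [0], 0.0, None
    for t in range(1, len(hists)):
        d = diffs[t - 1]
        recent = diffs[max(0, t - 1 - window):t]          # local difference statistics
        mu, sd = float(np.mean(recent)), float(np.std(recent)) + 1e-6
        t_high, t_low = mu + a * sd, mu + b * sd

        if d >= t_high:                                   # abrupt cut
            boundaries.append(t)
            acc, start = 0.0, None
        elif d >= t_low:                                  # possible gradual transition
            if start is None:
                start, acc = t, 0.0
            acc += d
            if acc >= t_high:                             # accumulated change confirms it
                boundaries.append(start)
                acc, start = 0.0, None
        else:                                             # change faded out, drop candidate
            acc, start = 0.0, None
    return boundaries

if __name__ == "__main__":
    # Synthetic clip: three "scenes" of flat frames with different brightness.
    frames = [np.full((48, 64), v, dtype=np.uint8)
              for v in [10] * 40 + [120] * 40 + [230] * 40]
    print(dual_threshold_shots(frames))   # expected boundaries near 0, 40 and 80
```

Each detected shot would then be described by its visual and audio features and passed to the multiple-instance learning stage described above.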

Description

Technical Field

[0001] The invention relates to the field of pattern recognition and computer network content security, and in particular to the problem of video classification.

Background Technique

[0002] With the rapid development of Internet technology and applications, people's understanding and use of the Internet have become deeper and deeper. Through the Internet, people can obtain rich information and knowledge, communicate conveniently, and enjoy various forms of entertainment. However, the Internet is open, and the massive amount of information it carries inevitably includes harmful content: pornographic, violent, terrorist and other harmful information also spreads widely over the network. Minors, who are still developing both physically and psychologically, are vulnerable to these bad influences; some have even turned to crime, leading teenagers astray and causing many social problems. Video sharing sites have sprung up like mushrooms...


Application Information

IPC(8): G06K9/00, G06K9/62
Inventors: 胡卫明, 邹星宇, 吴偶
Owner: 人民中科(北京)智能技术有限公司