Violent video classification method and system and storage medium

A violent-video classification technology, applied in the field of violent video classification and recognition, which can solve problems such as the high manpower and cost required, models lacking generalization ability, and learned features and knowledge being limited by the size and distribution of the training data, with the effect of improving generalization ability.

Pending Publication Date: 2020-12-11
COMMUNICATION UNIVERSITY OF CHINA

AI Technical Summary

Problems solved by technology

[0004] In addition, current violent video classification mainly relies on violent-video features learned from limited labeled training data, so the features and knowledge learned in this way are constrained by the size and distribution of the training data.
Such methods cannot effectively integrate external knowledge with labeled data information and lack network seman...



Examples


Embodiment 1

[0068] As shown in Figure 1, the violent video classification method provided by the present invention mainly includes the following steps:

[0069] S100. Obtain a sample video data stream, and extract an RGB modal data stream, a motion modal data stream, and an audio modal data stream from the sample video data stream;

[0070] S200. Input the RGB modal data stream, motion modal data stream, and audio modal data stream into their corresponding feature extraction network models, so as to extract the RGB data features, motion optical flow features, and audio modal features used to describe violent scenes;

[0071] S300. Input the RGB data features and motion optical flow features into an internal knowledge model based on a multi-scale attention convolutional neural network for processing, so as to obtain new RGB data features and new motion optical flow features that emphasize key violence features;

[0072] S400. Input the new RGB data features, new motion optical flow features, and audio ...
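
The following is a minimal illustrative sketch, not the patent's actual network: it assumes per-segment RGB and optical-flow feature tensors (the class name MultiScaleAttention, the parameter feat_dim, and the segment counts are invented for the example) and shows how a multi-scale attention block could re-weight segments so that those carrying violence-related cues receive larger weights, yielding the "new" features referred to in step S300.

```python
# Hypothetical sketch of a multi-scale temporal attention block (PyTorch);
# it is not the patent's exact architecture, only an illustration of the idea.
import torch
import torch.nn as nn

class MultiScaleAttention(nn.Module):
    def __init__(self, feat_dim: int, scales=(1, 3, 5)):
        super().__init__()
        # One 1-D convolution per temporal scale; padding keeps the segment count.
        self.branches = nn.ModuleList(
            nn.Conv1d(feat_dim, 1, kernel_size=k, padding=k // 2) for k in scales
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, segments, feat_dim) features from a modality backbone.
        h = x.transpose(1, 2)                                 # (batch, feat_dim, segments)
        scores = torch.stack([b(h) for b in self.branches]).mean(dim=0)
        attn = torch.softmax(scores, dim=-1)                  # (batch, 1, segments)
        return (h * attn).transpose(1, 2)                     # re-weighted features


rgb_feats = torch.randn(2, 16, 2048)              # e.g. 16 segments, 2048-d RGB features
flow_feats = torch.randn(2, 16, 1024)             # 1024-d optical-flow features
new_rgb = MultiScaleAttention(2048)(rgb_feats)    # "new RGB data features"
new_flow = MultiScaleAttention(1024)(flow_feats)  # "new motion optical flow features"
```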

Embodiment 2

[0125] Figure 5 is a schematic diagram of the basic composition of the violent video classification system integrating internal and external knowledge modules in Embodiment 2 of the present invention. According to their functions, the system is mainly divided into the following seven modules (not shown in the figure):

[0126] A sample data extraction module, configured to obtain a sample video data stream, and extract an RGB modal data stream, a motion modal data stream and an audio modal data stream from the sample video data stream;

[0127] A modality feature extraction module, configured to input the RGB modal data stream, motion modal data stream, and audio modal data stream into their respective corresponding feature extraction network models, so as to extract the RGB data features, motion optical flow features, and audio modal features used to describe violent scenes;

[0128] The key semantic feature extraction modul...
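
As a rough illustration of how such modules might be wired together (the class name ViolentVideoClassifier and all the callables below are hypothetical stand-ins, since only three of the seven modules are described above), a composition skeleton could look like this:

```python
# Pipeline skeleton mirroring the module names of Embodiment 2; internals are assumptions.
from typing import Any, Callable, Dict

class ViolentVideoClassifier:
    def __init__(self,
                 extract_streams: Callable[[Any], Dict[str, Any]],
                 modality_backbones: Dict[str, Callable[[Any], Any]],
                 internal_knowledge: Callable[[Any, Any], Any],
                 classify: Callable[[Dict[str, Any]], Any]):
        self.extract_streams = extract_streams        # sample data extraction module
        self.modality_backbones = modality_backbones  # modality feature extraction module
        self.internal_knowledge = internal_knowledge  # key semantic feature extraction module
        self.classify = classify                      # remaining fusion / classification modules

    def predict(self, video: Any) -> Any:
        streams = self.extract_streams(video)         # RGB / motion / audio data streams
        feats = {m: self.modality_backbones[m](s) for m, s in streams.items()}
        feats["rgb"], feats["flow"] = self.internal_knowledge(feats["rgb"], feats["flow"])
        return self.classify(feats)                   # multi-label violence scores


# Toy wiring with stand-in callables; real backbones and classifiers would go here.
clf = ViolentVideoClassifier(
    extract_streams=lambda v: {"rgb": v, "flow": v, "audio": v},
    modality_backbones={"rgb": lambda s: s, "flow": lambda s: s, "audio": lambda s: s},
    internal_knowledge=lambda r, f: (r, f),
    classify=lambda feats: {"violence_score": 0.0},
)
print(clf.predict("sample_video.mp4"))
```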

Embodiment 3

[0134] Furthermore, the present invention also provides a computer storage medium, characterized in that a computer program executable by a processor is stored therein, and the computer program, when executed by the processor, implements the above violent video classification method.

[0135] In addition, the present invention also provides a computer device, characterized by comprising a memory and a processor, where the processor is configured to execute the computer program stored in the memory so as to implement the above violent video classification method.



Abstract

The invention provides a violent video classification method and system integrating internal and external knowledge, a storage medium, and computer equipment. The method comprises the steps of: extracting features of the RGB modality, motion modality, and audio modality of a multi-label violent video and establishing a multi-modal feature fusion network for feature fusion; providing an internal knowledge module that adaptively learns multi-scale violent video features, so that richer and more critical information can be learned; introducing an external knowledge module that improves the model with high-level semantics; constructing a violence-related knowledge graph and proposing an optimization strategy for eliminating redundancy between different labels, so as to obtain a violence correction matrix that corrects the model; introducing a fusion score smoothing strategy to eliminate prediction errors; and building a violent video classification model integrating internal and external knowledge models. The violent video classification scheme designed by the invention improves the effectiveness and stability of violent video classification.
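
As a worked illustration of the correction-matrix and score-smoothing ideas mentioned above (the label set, the matrix values, and the function correct_and_smooth are assumptions for the example, not the patent's exact formulation), one might combine raw per-label scores with scores propagated through a label-relation matrix:

```python
# Illustrative NumPy sketch: apply a label-relation "correction matrix"
# (e.g. derived from a violence knowledge graph) to raw multi-label scores,
# then smooth by mixing the corrected and original predictions.
import numpy as np

def correct_and_smooth(raw_scores: np.ndarray,
                       correction: np.ndarray,
                       alpha: float = 0.7) -> np.ndarray:
    """raw_scores: (num_labels,) fused per-label violence scores.
    correction:  (num_labels, num_labels) row-normalized label-relation matrix.
    alpha:       weight kept on the original prediction (smoothing strength)."""
    propagated = correction @ raw_scores            # inject inter-label knowledge
    return alpha * raw_scores + (1.0 - alpha) * propagated


labels = ["blood", "fight", "explosion", "gunshot"]   # hypothetical label set
raw = np.array([0.90, 0.20, 0.10, 0.05])
# Example relation matrix: each row sums to 1 and mixes scores of related labels.
C = np.array([[0.70, 0.20, 0.05, 0.05],
              [0.20, 0.70, 0.05, 0.05],
              [0.05, 0.05, 0.70, 0.20],
              [0.05, 0.05, 0.20, 0.70]])
print(dict(zip(labels, correct_and_smooth(raw, C).round(3))))
```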

Description

Technical field

[0001] The invention belongs to the technical field of video classification, and in particular relates to a violent video classification and recognition method incorporating internal and external knowledge.

Background technique

[0002] The development of technologies such as the Internet has led to explosive growth of multimedia video content online. In recent years, with the high popularity of domestic live video streaming, unpredictable violent content frequently appears. Public safety is an important guarantee for maintaining social stability, and video surveillance is an effective measure to prevent violent crime. The size of cities and the large number of surveillance feeds make it almost impossible to monitor violence manually. The movie industry produces thousands of movies of different genres every year, and not all movies are suitable for teens and children, especially movies with violent scenes. Regular viewing of violent mov...


Application Information

IPC (8): G06K9/00; G06K9/62; G06N3/04; G06N3/08
CPC: G06N3/049; G06N3/08; G06V20/41; G06N3/045; G06F18/253
Inventor 吴晓雨, 张峰, 岳秋睿
Owner COMMUNICATION UNIVERSITY OF CHINA