Method for obtaining dynamic texture description model and video abnormal behavior retrieval method

A technology for dynamic texture description models, applied in character and pattern recognition, instruments, computer components, etc. It addresses the problems that existing descriptors do not involve video appearance information or cannot obtain temporal feature information in video, achieving fast calculation, simple implementation, and an efficient description effect.

Active Publication Date: 2019-11-08
UNIV OF SHANGHAI FOR SCI & TECH

AI Technical Summary

Problems solved by technology

Two-dimensional LBP can extract texture features from two-dimensional images, but it can only model static textures and cannot obtain temporal feature information from video. SCLBP, by contrast, focuses only on the motion information in the video and does not involve the video's appearance information.
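To make the limitation concrete, here is a minimal sketch of the classic two-dimensional LBP operator the passage refers to: each of the 8 neighbours of a pixel is thresholded against the centre value and the resulting bits are packed into one pattern value. This is the standard textbook formulation, not the patent's own descriptor; note that it looks at a single frame only, which is exactly why it cannot capture temporal change.

```python
import numpy as np

def lbp_code(patch):
    """Classic 8-neighbour LBP code for the centre pixel of a 3x3 patch.

    Each neighbour is thresholded against the centre intensity; the
    resulting bits are packed (clockwise from the top-left) into a
    single pattern value in [0, 255].
    """
    center = patch[1, 1]
    # Clockwise neighbour order starting at the top-left pixel.
    neighbors = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                 patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    return sum(1 << i for i, n in enumerate(neighbors) if n >= center)

patch = np.array([[10, 20, 30],
                  [40, 25, 60],
                  [70, 80, 90]])
print(lbp_code(patch))  # -> 252
```

Because the code is computed on one static patch, two frames with identical appearance but different motion yield identical histograms, which is the gap the dynamic texture model below is meant to close.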




Embodiment Construction

[0038] The method for obtaining a dynamic texture description model and the video abnormal behavior retrieval method of the present invention are described in more detail below with reference to the schematic diagrams, which represent a preferred embodiment of the invention. It should be understood that those skilled in the art can modify the invention described here while still realizing its advantageous effects. The following description should therefore be read as knowledge widely available to those skilled in the art, and not as a limitation of the invention.

[0039] As shown in figure 1, this embodiment proposes a method for obtaining a dynamic texture description model comprising the following steps S1-S4, specifically as follows:

[0040] Step S1: For a given pixel point in a video, define a dynamic texture description model for that pixel point; the dynamic texture description model includes an orthogonal vector ...



Abstract

The invention provides a method for acquiring a dynamic texture description model, comprising the following steps. First, a pixel point in a video is given and the dynamic texture description model is defined at that pixel; the model comprises orthogonal vector group models in three directions, each orthogonal vector group model comprises a central vector and a plurality of adjacent vectors surrounding it, and the intersection point of the three orthogonal vector group models is the pixel. The included angle between the central vector and each adjacent vector is then calculated and binarized, and finally the model is solved to obtain a pattern value for each orthogonal vector group model in the dynamic texture description model; the pattern values obtained in the three orthogonal directions form a three-dimensional vector. In this way, each video clip or space-time block is extracted into one TOSCLBP histogram feature. The TOSCLBP histogram features reflect the change information of dynamic textures in the video space-time blocks in both time and space, and the method is robust to interference such as noise and illumination changes in the video. The invention further provides a video abnormal behavior retrieval method.
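The pipeline in the abstract can be sketched roughly as follows. This is an illustrative reading, not the patent's exact formulation: the construction of the central and adjacent vectors on each plane (here, a hypothetical 2-D vector of intensity plus local gradient) and the angle threshold are assumptions, chosen only to show how the binarized included angle yields one pattern value per orthogonal plane and a three-dimensional code per voxel.

```python
import numpy as np

def plane_pattern(plane, j0, i0, angle_thresh=np.pi / 4):
    """Pattern value for one orthogonal plane: compare a central vector
    with 8 adjacent vectors by their included angle and binarize.

    The vector construction (intensity paired with a horizontal
    gradient) is an illustrative assumption, not the patent's exact
    definition of the orthogonal vector group model.
    """
    def vec(j, i):
        # Hypothetical 2-D vector at a sample point on the plane.
        return np.array([plane[j, i], plane[j, i + 1] - plane[j, i - 1]])

    c = vec(j0, i0)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dj, di) in enumerate(offsets):
        n = vec(j0 + dj, i0 + di)
        cosang = np.dot(c, n) / (np.linalg.norm(c) * np.linalg.norm(n) + 1e-9)
        angle = np.arccos(np.clip(cosang, -1.0, 1.0))
        if angle < angle_thresh:       # binarized included angle
            code |= 1 << bit
    return code

def tosclbp_codes(volume, t, y, x):
    """Three-dimensional pattern vector at voxel (t, y, x): one pattern
    value from each of the three orthogonal planes through the voxel."""
    xy = volume[t]            # spatial (appearance) plane
    xt = volume[:, y]         # horizontal space-time (motion) plane
    yt = volume[:, :, x]      # vertical space-time (motion) plane
    return [plane_pattern(xy, y, x),
            plane_pattern(xt, t, x),
            plane_pattern(yt, t, y)]

rng = np.random.default_rng(0)
video = rng.random((5, 7, 7))   # (T, H, W) toy video volume
codes = tosclbp_codes(video, 2, 3, 3)
print(codes)  # one pattern value per orthogonal plane
```

Histogramming these per-voxel codes over a video clip or space-time block would then give the TOSCLBP histogram feature described in the abstract.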

Description

Technical field

[0001] The invention belongs to the field of video signal feature extraction, and in particular relates to a method for acquiring a dynamic texture description model and a video abnormal behavior retrieval method.

Background technique

[0002] At present, there are many methods for feature extraction from video sequences, mainly divided into artificial (hand-crafted) features and learned features. Specifically: 1) learned features are obtained by optimizing a specific objective function through a machine learning algorithm, deep learning features being typical. In deep learning, neural networks such as convolutional neural networks or deep autoencoders can extract the important information in the data to obtain concise features with strong generalization ability and strong versatility that do not depend on prior knowledge. However, learned features usually depend on a large number of training samples and are computationally intensive, which is n...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06K9/00, G06K9/46, G06K9/62
CPC: G06V40/20, G06V10/50, G06V10/467, G06F18/28, G06F18/2155
Inventors: 胡兴, 段倩倩, 黄影平, 张亮, 杨海马
Owner: UNIV OF SHANGHAI FOR SCI & TECH