Public scene intelligent video monitoring method based on vision saliency and depth self-coding

A technology for intelligent video surveillance of public scenes, applied in the field of image processing, that addresses problems such as the declining attention and working efficiency of monitoring personnel

Active Publication Date: 2017-10-20
BEIHANG UNIV

AI Technical Summary

Problems solved by technology

[0002] In recent years, monitoring equipment has spread into all walks of life; modern airports, stations, hospitals and other public scenes are covered by thousands of monitoring devices. Because of the enormous volume of video data, relying on security personnel alone to filter out normal behavior and discover abnormal behavior in time entails a huge workload, and as the amount of material to analyze grows, the attention and working efficiency of the personnel decline markedly.


Image

  • Public scene intelligent video monitoring method based on vision saliency and depth self-coding

Examples


Example Embodiment

[0033] In order to make the objectives, technical solutions and advantages of the present invention clearer, the present invention will be further described in detail below in conjunction with the accompanying drawings and specific embodiments.

[0034] The intelligent video monitoring method for public scenes based on visual saliency and deep auto-encoding according to the present invention includes: decomposing the public-scene video into single frames; extracting motion information from the decomposed frames using visual saliency; and then calculating the optical flow of the moving object across adjacent frames, including the magnitude and direction of the motion speed. The subsequent detection process is divided into two stages, training and testing. During training, the optical flow of the training samples is used as the input of the deep auto-encoder, and the entire auto-encoding network is trained by minimizing a loss function. In the testing stage, the optical flow of the training and test samples is used as input: the encoder extracted from the trained network reduces the input to low-dimensional features, the reduced result is visualized, and a hypersphere represents the visual range of the training samples; a test sample whose visualization falls inside the hypersphere is judged normal, and one falling outside is judged abnormal.
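The training step described in [0034] — feeding optical-flow inputs into a deep auto-encoder and minimizing a reconstruction loss — can be sketched as follows. This is an illustrative simplification, not the patented implementation: the network depth, activation, loss, and optimizer are not specified in this extract, so the sketch assumes a single tanh hidden layer, mean-squared reconstruction error, plain gradient descent, and synthetic stand-in data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for optical-flow inputs: each row is a flattened flow
# field (per-pixel speed magnitude/direction), here synthetic low-rank data.
X = (rng.normal(size=(200, 16)) @ rng.normal(size=(16, 32))) * 0.1

d_in, d_hid = X.shape[1], 8                      # 32 -> 8 dimensionality reduction
W1 = rng.normal(scale=0.1, size=(d_in, d_hid))   # encoder weights
W2 = rng.normal(scale=0.1, size=(d_hid, d_in))   # decoder weights
lr = 0.1

def forward(X):
    H = np.tanh(X @ W1)   # encoder: compress to d_hid features
    R = H @ W2            # decoder: reconstruct the input
    return H, R

losses = []
for _ in range(500):
    H, R = forward(X)
    err = R - X
    losses.append(float(np.mean(err ** 2)))      # MSE reconstruction loss
    # Gradient descent on the loss (plain backprop for the two layers).
    gW2 = H.T @ err / len(X)
    gH = (err @ W2.T) * (1.0 - H ** 2)           # tanh'(z) = 1 - tanh(z)^2
    gW1 = X.T @ gH / len(X)
    W1 -= lr * gW1
    W2 -= lr * gW2
```

After training, the first half of `forward` (the `tanh(X @ W1)` map) plays the role of the extracted encoder used for dimensionality reduction in the testing stage.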



Abstract

The invention relates to a public scene intelligent video monitoring method based on visual saliency and deep auto-encoding. The method decomposes the video into single frames, uses visual saliency to extract motion information, and then calculates the optical flow of a moving object across adjacent frames. The subsequent detection process is divided into a training process and a testing process. During training, the optical flow of the training samples is used as the input of the auto-encoder, and the whole auto-encoding network is trained by minimizing a loss function. In the testing stage, the optical flow of the training and test samples is used as input; the encoder extracted from the trained auto-encoding network reduces the input to low-dimensional features, and the reduced result is then visualized. A hypersphere represents the visual range of the training samples. When a test sample is input, the same method is used for visualization: if the sample's visualization result falls within the hypersphere range, the sample is judged normal; otherwise it is judged abnormal. Intelligent monitoring of the video is thereby realized.
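The hypersphere decision rule in the abstract can be sketched as follows. As a loudly labeled assumption, the sphere here is simply the mean of the training features with the maximum training distance as radius; the patent may fit the sphere differently (for example, as a minimum-volume enclosing sphere).

```python
import numpy as np

rng = np.random.default_rng(1)

# 2-D stand-ins for the "visualized" (dimensionality-reduced) features
# of normal training samples.
train = rng.normal(size=(300, 2))

# Assumed fit: center = training mean, radius = farthest training sample.
center = train.mean(axis=0)
radius = np.linalg.norm(train - center, axis=1).max()

def is_normal(x):
    """Judge a test feature normal iff it falls inside the hypersphere."""
    return bool(np.linalg.norm(np.asarray(x) - center) <= radius)

print(is_normal([0.0, 0.0]))    # near the training data: normal
print(is_normal([50.0, 50.0]))  # far outside the sphere: abnormal
```

The same rule generalizes unchanged to higher-dimensional features, since only a norm against the center is computed.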

Description

Technical Field

[0001] The invention relates to image processing technology, and in particular to an intelligent video monitoring method for public scenes based on visual saliency and deep auto-encoding.

Background

[0002] In recent years, monitoring equipment has spread into all walks of life; modern airports, stations, hospitals and other public scenes are covered by thousands of monitoring devices. Because of the enormous volume of video data, relying on security personnel alone to filter out normal behavior and discover abnormal behavior in time entails a huge workload, and as the amount of material to analyze grows, the attention and working efficiency of the personnel decline markedly. To free people from such large-scale analysis and interpretation, research into intelligent video surveillance methods is of great significance. [0003] An intelligent monitoring system mainly involves three parts: the extraction of motion information in the video, that is...
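The first of the parts named in [0003] — extracting motion information from the video — can be illustrated with a deliberately simplified stand-in: plain temporal differencing with a threshold, rather than the visual-saliency and optical-flow computation the invention actually uses.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two consecutive 8x8 grayscale frames: a bright 2x2 "object" over
# faint background noise moves one pixel to the right between frames.
frame0 = rng.uniform(0.0, 0.05, size=(8, 8))
frame1 = frame0.copy()
frame0[3:5, 2:4] = 1.0   # object at columns 2-3 in frame 0
frame1[3:5, 3:5] = 1.0   # object at columns 3-4 in frame 1

diff = np.abs(frame1 - frame0)   # temporal difference as a crude motion cue
salient = diff > 0.5             # binary motion-saliency mask

rows, cols = np.nonzero(salient)
```

The mask fires only at the trailing and leading edges of the moving object (the overlap between the two positions cancels out), which is exactly the kind of motion region a saliency or optical-flow stage would then describe with speed magnitude and direction.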


Application Information

IPC(8): G06T7/246, G06T7/254
CPC: G06T7/246, G06T7/254, G06T2207/10016, G06T2207/10024, G06T2207/20024, G06T2207/20081, G06T2207/20084
Inventor: 王田, 乔美娜, 陈阳, 陶飞
Owner BEIHANG UNIV