
Multi-camera system target matching method based on deep-convolution neural network

A target matching method for multi-camera systems based on a deep convolutional neural network. It addresses the difficulty and complexity of camera calibration in large-scale surveillance scenes, the difficulty of extracting robust target features, and the resulting loss of matching accuracy.

Inactive Publication Date: 2015-05-13
ZHEJIANG GONGSHANG UNIVERSITY

AI Technical Summary

Problems solved by technology

For large-scale video surveillance scenes, camera calibration is difficult and complex, and it is hard to reason about the spatial relationships, temporal relationships, and time offsets between cameras. The target matching methods currently in wide use between multiple cameras are therefore mainly feature-based, so the effectiveness of feature selection directly affects the accuracy of the matching results.
However, extracting robust features that effectively represent the target is itself a difficult problem.
The commonly used features, such as color and texture, are difficult to keep robust across all monitoring scenarios.



Embodiment Construction

[0024] The method of the invention comprises two parts: extraction of target features, and classification and identification of targets. Feature extraction uses deep learning: a neural network with multiple hidden layers transforms the sample features layer by layer, mapping the representation of each sample from its original space into a new feature space in which more useful features are learned. These features are then used as the input to a multi-class SVM classifier that classifies and identifies the target, ultimately improving the accuracy of classification or prediction. Figure 1 shows the implementation block diagram of the algorithm; the specific steps are as follows:
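The two-part structure described above (layer-by-layer feature transformation followed by a multi-class SVM) can be sketched as follows. This is a minimal illustration, not the patent's implementation: the layer weights are random stand-ins for learned parameters, the "images" are synthetic flattened vectors, and `extract_features` substitutes plain linear-plus-ReLU layers for the patent's convolutional network.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def extract_features(x, weights):
    """Transform samples layer by layer into a new feature space.

    Each layer applies a linear map followed by a ReLU nonlinearity,
    standing in for the patent's multi-hidden-layer network.
    """
    h = x
    for w in weights:
        h = relu(h @ w)
    return h

# Synthetic stand-in for n flattened target images (64-dim) with 3 identity labels.
n, d, n_labels = 150, 64, 3
x = rng.normal(size=(n, d))
y = rng.integers(0, n_labels, size=n)

# Two hidden layers (64 -> 32 -> 16); real weights would be learned, not random.
weights = [rng.normal(scale=0.1, size=(64, 32)),
           rng.normal(scale=0.1, size=(32, 16))]

features = extract_features(x, weights)

# Multi-class SVM on the transformed features (SVC is one-vs-one internally).
clf = SVC(kernel="linear").fit(features, y)
pred = clf.predict(features)
print(features.shape, pred.shape)
```

The point of the sketch is the division of labor: the network only produces a feature vector, and all class decisions are delegated to the SVM trained on those vectors.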

[0025] (1) Preprocessing of the target images: extract n target images in the multi-camera domain and divide them among m labels; use the bicubic interpolation algorithm to uniformly adjust the image siz...
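The bicubic resizing in step (1) can be sketched with Pillow. The excerpt is truncated before the target size is given, so the 32x32 size below is an assumption for illustration, and the input patch is a synthetic stand-in for a cropped target image.

```python
import numpy as np
from PIL import Image

# Hypothetical target patch; in practice this would be a cropped detection.
patch = Image.fromarray(
    (np.random.default_rng(1).random((37, 53, 3)) * 255).astype("uint8"))

# Resize every patch to a common size with bicubic interpolation, so all
# targets enter the network with uniform dimensions.
TARGET_SIZE = (32, 32)  # assumed; the excerpt truncates before giving a size
resized = patch.resize(TARGET_SIZE, Image.BICUBIC)
print(resized.size)
```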



Abstract

Disclosed is a target matching method for multi-camera systems based on a deep convolutional neural network. The method initializes multiple convolution kernels using a locality preserving projection method, downsamples the images by max pooling, and, through layer-by-layer feature transformation, extracts histogram features with higher robustness and representativeness. Classification and identification are performed by a multi-category support vector machine (SVM) classifier. When a target enters one camera's field of view from another's, its features are extracted and the corresponding target label is assigned. The method achieves accurate identification of targets in a multi-camera cooperative monitoring area and can be used for target handoff, tracking, and the like.
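The max-pooling downsampling mentioned in the abstract can be sketched in a few lines of NumPy. This is a generic illustration of 2x2 max pooling, not the patent's code; the 4x4 feature map is synthetic.

```python
import numpy as np

def max_pool2d(img, k=2):
    """k x k max pooling: keep the maximum of each non-overlapping k x k
    block, shrinking each spatial dimension by a factor of k."""
    h, w = img.shape
    h, w = h - h % k, w - w % k  # trim edges so blocks tile evenly
    blocks = img[:h, :w].reshape(h // k, k, w // k, k)
    return blocks.max(axis=(1, 3))

fmap = np.arange(16, dtype=float).reshape(4, 4)
pooled = max_pool2d(fmap)
print(pooled)  # [[ 5.  7.] [13. 15.]]
```

Max pooling keeps only the strongest response in each block, which is what makes the downsampled feature maps tolerant to small spatial shifts of the target.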

Description

technical field

[0001] The invention belongs to the field of intelligent video surveillance in computer vision and provides a target matching method based on a deep convolutional neural network for multi-camera collaborative video surveillance systems.

background technique

[0002] In large-scale video surveillance sites such as airports, subway stations, and squares, target matching between multiple cameras is a key step in tracking targets within a multi-camera collaborative monitoring system. For large-scale video surveillance scenes, camera calibration is difficult and complex, and it is hard to reason about the spatial relationships, temporal relationships, and time offsets between cameras. The target matching methods currently in wide use between multiple cameras are therefore mainly feature-based, so the effectiveness of feature selection directly affects the accuracy of the matching results. However, the extraction of robust features ...

Claims


Application Information

IPC(8): G06K9/66; G06K9/46
Inventors: 王慧燕, 王勋, 何肖爽, 陈卫刚
Owner ZHEJIANG GONGSHANG UNIVERSITY