
Target matching method for multi-camera system based on deep convolutional neural network

A neural network and multi-camera technology, applied in the field of target matching based on deep convolutional neural networks, which can solve problems such as the difficulty of camera calibration, the effect of feature selection on the accuracy of matching results, and the difficulty of extracting robust features.

Inactive Publication Date: 2018-02-09
ZHEJIANG GONGSHANG UNIVERSITY

AI Technical Summary

Problems solved by technology

For large-scale video surveillance scenes, camera calibration is difficult and complex, and it is hard to reason about the spatial relationships, temporal relationships, and time differences between cameras. Therefore, the currently widely used target matching methods between multiple cameras are mainly feature-based, and the effectiveness of feature selection directly affects the accuracy of the matching results. However, extracting robust features that effectively represent the target is a difficult problem. Commonly used features include color and texture, but such features struggle to remain robust across all monitoring scenarios.




Embodiment Construction

[0024] The method of the invention comprises two parts: extraction of target features, and classification and identification of targets. Feature extraction uses a deep learning approach: a multi-hidden-layer neural network transforms the sample features layer by layer, mapping the feature representation of the sample from the original space into a new feature space in order to learn more useful features. These features are then used as the input to a multi-class SVM classifier that classifies and identifies the target, thereby improving the final accuracy of classification or prediction. Figure 1 shows the implementation block diagram of this algorithm; the specific steps are as follows:
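The two-stage pipeline described above (layer-by-layer feature transformation followed by multi-class SVM classification) can be sketched as follows. The layer sizes, random initialization, ReLU activations, and the linear one-vs-rest decision rule are illustrative assumptions for the sketch, not the patent's actual parameters:

```python
import numpy as np

def transform_features(x, weights):
    """Layer-by-layer feature transformation: each hidden layer maps the
    previous representation into a new feature space. `weights` is a list
    of weight matrices, one per hidden layer (ReLU is an assumed choice)."""
    h = x
    for W in weights:
        h = np.maximum(0.0, h @ W)  # affine map followed by ReLU
    return h

def svm_predict(features, svm_W, svm_b):
    """Multi-class (one-vs-rest) linear SVM decision rule: pick the class
    whose hyperplane yields the largest margin score."""
    scores = features @ svm_W + svm_b
    return int(np.argmax(scores))

# Toy example with hypothetical, randomly initialized parameters.
rng = np.random.default_rng(0)
weights = [rng.standard_normal((8, 6)), rng.standard_normal((6, 4))]
svm_W, svm_b = rng.standard_normal((4, 3)), rng.standard_normal(3)

x = rng.standard_normal(8)  # a raw target feature vector
label = svm_predict(transform_features(x, weights), svm_W, svm_b)
```

In the patent, the network weights would come from training (with convolution kernels initialized as described in the abstract) and the SVM from a standard multi-class training procedure; here they are random only to make the data flow concrete.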

[0025] (1) Preprocessing of the target images: extract n target images in the multi-camera domain and divide them into m labels; use the bicubic interpolation algorithm to uniformly adjust the image siz...
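The bicubic-resize step above can be sketched with Pillow; the (64, 64) target size and the [0, 1] normalization are illustrative assumptions, since the patent text is truncated before stating its actual values:

```python
import numpy as np
from PIL import Image

def preprocess(img_array, size=(64, 64)):
    """Resize a target image patch to a uniform size using bicubic
    interpolation, then scale pixel values to [0, 1]. The target size
    is an assumed placeholder, not the patent's value."""
    img = Image.fromarray(img_array)
    img = img.resize(size, Image.BICUBIC)  # bicubic interpolation
    return np.asarray(img, dtype=np.float32) / 255.0

# Toy patch: a random 120x80 RGB image, as might be cropped from one camera.
patch = (np.random.rand(120, 80, 3) * 255).astype(np.uint8)
out = preprocess(patch)
```

All patches then share one shape regardless of the original detection size, which is what allows them to be batched into the network.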



Abstract

A multi-camera object matching method based on a deep convolutional neural network. The invention initializes multiple convolution kernels using the locality preserving projection method, down-samples images using max pooling, and extracts more robust and representative histogram features through layer-by-layer feature transformation; these features are then fed to a multi-class support vector machine (SVM) classifier for classification and recognition. When a target enters another camera's field of view from one camera's field of view, its features are extracted and the corresponding target label is assigned, enabling accurate recognition of the target for target handover and tracking in multi-camera collaborative monitoring.
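The max-pooling down-sampling mentioned in the abstract can be shown on a small feature map; the 2x2 non-overlapping window is an assumed choice, since the patent does not specify the pooling size here:

```python
import numpy as np

def max_pool2d(x, k=2):
    """Down-sample a 2-D feature map by taking the maximum over each
    non-overlapping k x k window (assumes both dims are divisible by k)."""
    h, w = x.shape
    return x.reshape(h // k, k, w // k, k).max(axis=(1, 3))

fm = np.array([[1, 2, 0, 1],
               [3, 4, 1, 0],
               [0, 1, 5, 6],
               [2, 0, 7, 8]], dtype=float)
pooled = max_pool2d(fm)
# pooled is [[4, 1], [2, 8]]: each entry is the max of one 2x2 window
```

Max pooling halves each spatial dimension while keeping the strongest activation in each region, which is what makes the learned features more tolerant of small shifts of the target between cameras.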

Description

technical field

[0001] The invention belongs to the field of intelligent video monitoring in computer vision and is suitable for a target matching method based on a deep convolutional neural network in a multi-camera collaborative video monitoring system.

Background technique

[0002] In large-scale video surveillance settings such as airports, subway stations, and squares, target matching between multiple cameras is a key step in tracking targets with a multi-camera collaborative monitoring system. For large-scale video surveillance scenes, camera calibration is difficult and complex, and it is hard to reason about the spatial relationships, temporal relationships, and time differences between cameras. Therefore, the currently widely used target matching methods between multiple cameras are mainly feature-based, and the effectiveness of feature selection directly affects the accuracy of the matching results. However, the extraction of robust features ...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06K9/66, G06K9/46
Inventors: 王慧燕, 华璟
Owner: ZHEJIANG GONGSHANG UNIVERSITY