Target matching method among multiple cameras based on multi-feature fusion and incremental learning

A multi-feature fusion and incremental learning technology, applied in the field of intelligent video surveillance, which solves the problems of poor real-time performance, weak ability to recognize multiple types of targets, and slow matching speed, with the effects of shortening recognition time, reducing feature dimensionality, and avoiding the curse of dimensionality.

Active Publication Date: 2013-10-02
ZHEJIANG GONGSHANG UNIVERSITY

AI Technical Summary

Problems solved by technology

Traditional color histogram features are easily affected by illumination changes and by the optical characteristics of the camera itself, and their ability to recognize multiple types of targets is weak.
Although the SIFT feature adapts well to deformation and illumination changes of the image target and offers relatively high positioning accuracy, when the SIFT feature is directly used for matching, its high dimensionality makes matching slow and real-time performance poor.

Method used


Examples


Embodiment Construction

[0027] The method of the invention comprises three parts: representation of target features, online updating of the target model, and target recognition. The proposed target matching method combines the target's hierarchical vocabulary tree histogram feature, its color histogram feature and the kernel PCA algorithm to build a target feature representation model, called CVMFH (competitive major feature histogram fusion representation), and then uses the fused feature as the input to a multi-class SVM classifier to classify and identify the target. At the same time, incremental learning is introduced into the field of video surveillance: the idea of incremental learning is integrated into the classifier to build an incremental SVM classifier, so that the target model can be continuously updated online during target classification and recognition. The specific steps are as follows:

[0028] Step (1): Construct the histogram feature of the hierarchical vocabulary tree ...
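As a concrete illustration of this step, the sketch below builds a hierarchical vocabulary tree by recursive k-means over SIFT descriptors and then quantizes one image's descriptors into a leaf-occupancy histogram. The branching factor, tree depth and all function names are illustrative assumptions rather than values fixed by the patent, and OpenCV and scikit-learn are used only as convenient stand-ins.

```python
# Hedged sketch: hierarchical vocabulary tree histogram from SIFT descriptors.
# Branching factor, depth and helper names are assumptions, not patent values.
import numpy as np
import cv2
from sklearn.cluster import KMeans

def train_vocab_tree(descriptors, branch=4, depth=3):
    """Recursively cluster SIFT descriptors into a tree of k-means nodes."""
    descriptors = np.asarray(descriptors, dtype=np.float64)
    if depth == 0 or len(descriptors) < branch:
        return None
    km = KMeans(n_clusters=branch, n_init=4, random_state=0).fit(descriptors)
    children = [train_vocab_tree(descriptors[km.labels_ == i], branch, depth - 1)
                for i in range(branch)]
    return {"kmeans": km, "children": children}

def quantize(desc, node, path=0, branch=4):
    """Descend the tree to a leaf and return the leaf index."""
    if node is None:
        return path
    i = int(node["kmeans"].predict(desc[None, :])[0])
    return quantize(desc, node["children"][i], path * branch + i, branch)

def vocab_tree_histogram(gray_image, tree, branch=4, depth=3):
    """One image's SIFT descriptors -> normalized leaf-occupancy histogram."""
    sift = cv2.SIFT_create()
    _, descs = sift.detectAndCompute(gray_image, None)
    hist = np.zeros(branch ** depth)
    if descs is not None:
        for d in descs.astype(np.float64):
            hist[quantize(d, tree, 0, branch)] += 1
        hist /= max(hist.sum(), 1.0)
    return hist

# Usage (illustrative): train the tree once on descriptors pooled from the
# training images, then describe each target image by its leaf histogram.
# tree = train_vocab_tree(np.vstack(all_training_descriptors))
# h = vocab_tree_histogram(gray_image, tree)
```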



Abstract

The invention discloses a target matching method among multiple cameras based on multi-feature fusion and incremental learning. For the target feature model, SIFT (Scale Invariant Feature Transform) features are extracted from the target and quantized on an established hierarchical vocabulary tree to form a hierarchical vocabulary tree histogram feature; a color histogram feature is also extracted; a preliminary fusion feature is obtained from the two histogram features; kernel PCA (Principal Component Analysis) dimensionality reduction is applied to the fusion feature to extract a nonlinear fusion feature. For classification and identification, the nonlinear fusion features of multiple targets are fed into a multi-class SVM (Support Vector Machine) classifier. Online updating of the target model is accomplished by applying incremental learning to the multi-class SVM classifier: when a new target appears in the cameras' fields of view, or when the appearance and shape of a target change greatly, the target model is continuously updated by incremental SVM learning. By fusing the target's vocabulary tree histogram feature with its color histogram feature, the method significantly increases the target recognition rate.
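To make the abstract's pipeline concrete, here is a minimal sketch of the fusion stage: the vocabulary tree histogram and color histogram are concatenated, reduced with kernel PCA, and the resulting nonlinear fusion feature is fed to a multi-class SVM. scikit-learn's KernelPCA and SVC stand in for those steps, the random arrays are placeholders for real histograms, and the dimensions, kernel choice and function names are assumptions rather than values taken from the patent.

```python
# Hedged sketch of the fusion pipeline: histogram concatenation -> kernel PCA
# -> multi-class SVM. All dimensions and names below are illustrative.
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.svm import SVC

def build_fusion_features(vocab_tree_hists, color_hists, n_components=64):
    """Concatenate the two histograms and extract a nonlinear fusion feature."""
    fused = np.hstack([vocab_tree_hists, color_hists])        # preliminary fusion
    kpca = KernelPCA(n_components=n_components, kernel="rbf")
    return kpca.fit_transform(fused), kpca                    # nonlinear fusion feature

rng = np.random.default_rng(0)
vocab_hists = rng.random((200, 64))     # placeholder vocabulary tree histograms
color_hists = rng.random((200, 48))     # placeholder color histograms (16 bins x RGB)
labels = rng.integers(0, 5, size=200)   # 5 target identities

features, kpca = build_fusion_features(vocab_hists, color_hists)
clf = SVC(kernel="rbf", decision_function_shape="ovr").fit(features, labels)

# Matching a new observation: apply the same fusion, then ask the classifier.
new_fused = np.hstack([rng.random((1, 64)), rng.random((1, 48))])
print(clf.predict(kpca.transform(new_fused)))
```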

Description

Technical field
[0001] The invention belongs to the field of intelligent video surveillance in computer vision and relates to a target matching method based on multi-feature fusion and incremental learning, suitable for video surveillance with multiple non-overlapping cameras.
Background technique
[0002] In large-scale video surveillance sites (such as airports, subway stations and squares), target matching between multiple cameras is a key step in continuously tracking targets across a multi-camera environment with non-overlapping fields of view. Here, target matching refers to the process by which the system automatically assigns the corresponding target labels to multiple targets when they leave the field of view of one camera and enter the field of view of another. Traditional multi-camera target matching methods include feature-based target matching and tracking-based target matching. However, in a non-overlapping video surveillance environment, the cameras are relatively independent, and ...
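The background above frames cross-camera target matching as repeatedly assigning labels to re-appearing targets, and the abstract adds that the target model is updated online through incremental SVM learning as appearances change. The sketch below illustrates only that online-update idea; scikit-learn's SGDClassifier with hinge loss (a linear SVM trained by stochastic gradient descent) is a stand-in for the incremental SVM, and the feature dimension, class count and data are placeholder assumptions.

```python
# Hedged sketch of online model updating with an incrementally trained linear
# SVM stand-in; the patent's exact incremental SVM update is not reproduced.
import numpy as np
from sklearn.linear_model import SGDClassifier

n_ids, dim = 5, 64                        # placeholder: 5 identities, 64-D fused feature
clf = SGDClassifier(loss="hinge")         # hinge loss == linear SVM, trained online
classes = np.arange(n_ids)

rng = np.random.default_rng(1)
X0 = rng.random((100, dim))               # initial labelled fusion features
y0 = rng.integers(0, n_ids, size=100)
clf.partial_fit(X0, y0, classes=classes)  # first call must list all classes

# When a target re-appears in another camera with a changed appearance, its
# newly labelled samples update the model without retraining from scratch.
X_new = rng.random((10, dim))
y_new = rng.integers(0, n_ids, size=10)
clf.partial_fit(X_new, y_new)

print(clf.predict(rng.random((1, dim))))  # assign a label to a fresh observation
```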

Claims


Application Information

IPC(8): G06K9/62; G06K9/46
Inventors: 王慧燕 (Wang Huiyan), 郑佳 (Zheng Jia)
Owner: ZHEJIANG GONGSHANG UNIVERSITY