
Residual depth feature target tracking method for drift detection

A deep-feature target tracking technology, applied in the fields of image processing and computer vision, which addresses problems such as tracking speed that cannot meet the needs of real-time tracking, real-time tracking performance that falls short of requirements, and other conditions unfavorable to target tracking.

Active Publication Date: 2018-12-07
NANJING UNIV OF INFORMATION SCI & TECH

AI Technical Summary

Problems solved by technology

In recent years, the most representative work in target tracking has been the kernelized correlation filter algorithm proposed by J. F. Henriques et al. (Henriques J F, Caseiro R, Martins P, et al. High-Speed Tracking with Kernelized Correlation Filters [J]. IEEE Transactions on Pattern Analysis & Machine Intelligence, 2014, 37(3): 583-596). Its very high tracking speed showed that real-time tracking is possible, but the traditional hand-crafted features it uses lead to poor tracking results.
In the same year, the very deep convolutional neural network proposed by K. Simonyan et al. (Simonyan K, Zisserman A. Very Deep Convolutional Networks for Large-Scale Image Recognition [J]. arXiv preprint arXiv:1409.1556, 2014) achieved brilliant results in ILSVRC-2014, and deep learning began to play a role in computer vision. In 2015, the residual network proposed by K. He et al. (He K, Zhang X, Ren S, et al. Deep Residual Learning for Image Recognition [J]. 2015: 770-778) introduced a residual structure that provides a way to train deeper networks, but the real-time performance of tracking with such networks still cannot meet requirements well.
In subsequent studies, deep networks were used either for end-to-end tracking, such as the multi-domain neural network proposed by H. Nam et al. (Nam H, Han B. Learning Multi-domain Convolutional Neural Networks for Visual Tracking [C] // Computer Vision and Pattern Recognition. IEEE, 2016: 4293-4302), which takes the original image as input and directly outputs the tracking result, or as a feature extractor, such as the convolutional features for correlation-filter-based tracking proposed by M. Danelljan et al. (Danelljan M, Häger G, Khan F S, et al. Convolutional Features for Correlation Filter Based Visual Tracking [C] // IEEE International Conference on Computer Vision Workshop. IEEE Computer Society, 2015: 621-629). Although these methods achieve very good tracking accuracy, they still face the problem that their tracking speed cannot meet the needs of real-time tracking.
At the same time, M. Danelljan et al. (Danelljan M, Bhat G, Khan F S, et al. ECO: Efficient Convolution Operators for Tracking [J]. 2016: 6931-6939) explored the impact of features from different convolutional layers on tracking and concluded that shallow features are more suitable for tracking; and C. Ma et al. (Ma C, Huang J B, Yang X, et al. Hierarchical Convolutional Features for Visual Tracking [C] // IEEE International Conference on Computer Vision. IEEE Computer Society, 2015: 3074-3082) proposed hierarchical convolutional features for tracking and analyzed the impact of a neural network's shallow and deep features on tracking: effective use of shallow and deep features can significantly improve tracking, but manually choosing the feature fusion scheme is not conducive to accurate target tracking.
[0004] The one-sidedness of traditional features, together with the lack of detection methods and remedies for model drift in traditional tracking models, limits the performance of traditional tracking methods.




Embodiment Construction

[0064] In order to make the object, technical solution and advantages of the present invention clearer, the present invention will be described in detail below with reference to the accompanying drawings and specific embodiments.

[0065] Figure 4 is a framework diagram of the method of the present invention; the concrete steps are as follows:

[0066] (1) Training residual deep feature network

[0067] The entire network structure is shown in Figure 1; it contains 4 convolutional layers, 2 fully connected layers, and 1 residual structure. The specific operations contained in Conv1 are: convolutional layer → BN (Batch Normalization) layer → pooling layer. The convolutional layer contains multiple convolution kernels that extract features from different aspects, and these features distinguish the target most essentially. As the normalization layer of the network, the BN layer normalizes the output of the convolutional layer, speeds up network training, and preve...
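As an illustration of paragraph [0067], the following is a minimal PyTorch-style sketch of a backbone with 4 convolutional layers, 2 fully connected layers, and one residual connection that fuses a shallow feature map with a deeper one. The channel counts, kernel sizes, activations, and the 1x1 projection used to match channels are assumptions made for illustration and are not taken from the patent.

```python
# Hedged sketch of the kind of backbone described in [0067]: 4 conv layers,
# 2 fully connected layers, and one residual (skip) connection fusing shallow
# and deep features. All sizes below are illustrative assumptions.
import torch
import torch.nn as nn

class ResidualDeepFeatureNet(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        # Conv1: convolution -> BN -> pooling, as described in [0067]
        self.conv1 = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1),
            nn.BatchNorm2d(32),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
        )
        self.conv2 = nn.Sequential(
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.BatchNorm2d(64), nn.ReLU(inplace=True),
        )
        self.conv3 = nn.Sequential(
            nn.Conv2d(64, 64, kernel_size=3, padding=1),
            nn.BatchNorm2d(64), nn.ReLU(inplace=True),
        )
        self.conv4 = nn.Sequential(
            nn.Conv2d(64, 64, kernel_size=3, padding=1),
            nn.BatchNorm2d(64),
        )
        # 1x1 projection so the shallow map can be added to the deep map
        self.skip = nn.Conv2d(32, 64, kernel_size=1)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc1 = nn.Linear(64, 128)
        self.fc2 = nn.Linear(128, num_classes)

    def forward(self, x):
        shallow = self.conv1(x)                        # shallow, high-resolution features
        deep = self.conv4(self.conv3(self.conv2(shallow)))
        fused = torch.relu(deep + self.skip(shallow))  # residual fusion of shallow and deep features
        h = self.pool(fused).flatten(1)
        return self.fc2(torch.relu(self.fc1(h)))

# Example: features for a 64x64 search patch
# net = ResidualDeepFeatureNet()
# out = net(torch.randn(1, 3, 64, 64))
```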



Abstract

The invention discloses a residual depth feature target tracking method for drift detection. Hierarchical features are extracted by using a convolutional neural network; a residual structure is added into the convolutional neural network to connect different network layers and realize fusion of shallow and deep features, so that no feature fusion scheme needs to be designed manually: the network structure realizes feature fusion automatically. The deep features distinguish the target from the background, and their discriminative power is improved compared with traditional features. When the target position of the current frame is predicted, a strategy for detecting model drift is put forward and a response-strength-reduction counter is designed. Counting is carried out by comparing the response strengths of adjacent frames, and whether model drift occurs is determined from the value of the counter, so that a corresponding model update scheme can be applied as a remedial measure to realize precise tracking.
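The drift-detection strategy in the abstract can be read as a simple counter driven by per-frame response strengths. Below is a minimal sketch of such a counter; the comparison rule, the threshold names drop_ratio and max_drops, and the helper functions correlation_response, normal_update, and remedial_update are hypothetical, since the patent only states that adjacent-frame response strengths are compared, a counter is incremented, and a model update scheme is applied as a remedy.

```python
# Hedged sketch of a response-strength-reduction counter for drift detection.
# Thresholds and the remedial step are illustrative assumptions, not the
# patent's exact rule.
class DriftDetector:
    def __init__(self, drop_ratio=0.85, max_drops=5):
        self.drop_ratio = drop_ratio   # response must fall below this fraction of the previous one
        self.max_drops = max_drops     # consecutive drops that count as drift
        self.prev_response = None
        self.counter = 0

    def update(self, response_strength):
        """Return True if model drift is declared for this frame."""
        if self.prev_response is not None and response_strength < self.drop_ratio * self.prev_response:
            self.counter += 1          # response keeps weakening relative to the previous frame
        else:
            self.counter = 0           # response recovered, reset the counter
        self.prev_response = response_strength
        return self.counter >= self.max_drops

# Usage inside a tracking loop (hypothetical helpers):
# detector = DriftDetector()
# for frame in video:
#     response = correlation_response(model, frame)
#     if detector.update(response.max()):
#         model = remedial_update(model)        # e.g. fall back to a conservatively updated model
#     else:
#         model = normal_update(model, frame)   # ordinary per-frame model update
```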

Description

Technical Field

[0001] The invention relates to the technical fields of image processing and computer vision, and in particular to a residual depth feature target tracking method for drift detection.

Background Technique

[0002] Surveillance cameras are installed at every traffic section. They observe passing vehicles at all times and confirm and track the identity information of offending vehicles. The target tracking technology used here is already one of the core research topics in the field of computer vision and has a wide range of applications in real life: it plays an important role not only in traffic monitoring but also in smart phones, intelligent robots, autonomous driving, the military, and other fields.

[0003] Traditional target tracking algorithms cannot achieve good tracking results when encountering difficulties such as target deformation, illumination changes, and background clutter, and cannot meet people's needs. With the vigorous development of deep learnin...


Application Information

IPC(8): G06T7/246, G06K9/62
CPC: G06T7/248, G06T7/251, G06T2207/20084, G06T2207/20081, G06T2207/10016, G06T2207/30232, G06F18/253
Inventor: 胡昭华, 郑伟, 钱坤
Owner NANJING UNIV OF INFORMATION SCI & TECH