
A method for visual tracking through spatio-temporal context

A spatio-temporal context visual tracking technology, applied in the field of visual tracking within computer vision. It addresses the problems of tracking-model degradation, tracking-target drift, and the introduction of erroneous background information, and achieves the effects of alleviating the noise-sample problem and avoiding tracking drift.

Active Publication Date: 2022-06-03
HUAQIAO UNIVERSITY +1

Problems solved by technology

[0004] Although existing techniques achieve the desired tracking results and perform well in long-term tracking, when the target object undergoes complex appearance changes (such as severe occlusion) and disappears from the current frame, erroneous background information is introduced and passed to the next frame. Accumulated over time, this degrades the quality of the tracking model and eventually causes the tracked target to drift.


Embodiment Construction

[0050] As shown in Figure 1, the overall steps of the present invention are:

[0051] Step 1: Initialize parameters;

[0052] Step 2: Train a context-aware filter to obtain a position model;
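The patent text shown here does not reproduce the filter equations, but Step 2's training of a context-aware filter can be illustrated with the standard closed-form, Fourier-domain solution used by context-aware correlation filters. The sketch below is a minimal single-channel version; the function names, the regularization weights `lam1`/`lam2`, and the MOSSE-style detection convention are assumptions for illustration, not taken from the patent.

```python
import numpy as np

def train_context_aware_filter(target_patch, context_patches, y, lam1=1e-4, lam2=25.0):
    """Closed-form training of a single-channel context-aware correlation filter
    in the Fourier domain (a sketch; names and weights are assumed).

    target_patch    : 2-D array cropped around the target
    context_patches : list of 2-D arrays cropped from the surrounding background
    y               : desired Gaussian-shaped response, same size as the patches
    lam1            : ridge regularization on the filter
    lam2            : penalty that suppresses the filter's response on context patches
    """
    A0 = np.fft.fft2(target_patch)
    Y = np.fft.fft2(y)
    # Denominator: target energy plus weighted context energy (all element-wise,
    # which is what makes the circulant/Fourier solution cheap).
    denom = np.conj(A0) * A0 + lam1
    for ctx in context_patches:
        Ai = np.fft.fft2(ctx)
        denom += lam2 * np.conj(Ai) * Ai
    # Filter that responds strongly on the target and weakly on the context.
    return (np.conj(A0) * Y) / denom

def detect(W, search_patch):
    """Correlate the learned filter with a new search patch and return the response map."""
    Z = np.fft.fft2(search_patch)
    return np.real(np.fft.ifft2(W * Z))
```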

[0053] Step 3: Train a scale correlation filter and take the scale with the maximum response to obtain a scale model; the order of Step 2 and Step 3 can be exchanged;
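Step 3's scale model can be sketched as a scale-pyramid search in the spirit of DSST-style scale estimation: crop the target at several scales, score each crop, and keep the scale with the largest maximum response. Everything below (the parameter values, the use of OpenCV for resizing, and passing the scorer in as a callable rather than a dedicated 1-D scale filter) is an assumed simplification, not the patent's exact formulation.

```python
import numpy as np
import cv2  # assumed available, used only to resize crops back to the template size

def estimate_scale(frame, center, base_size, score_patch, num_scales=33, scale_step=1.02):
    """Scale-pyramid search sketch.

    frame       : 2-D grayscale image
    center      : (x, y) target center
    base_size   : (width, height) of the current target template
    score_patch : callable returning a response map for a template-sized patch
    """
    best_scale, best_score = 1.0, -np.inf
    scales = scale_step ** (np.arange(num_scales) - num_scales // 2)
    for s in scales:
        w, h = int(round(base_size[0] * s)), int(round(base_size[1] * s))
        x0, y0 = int(center[0] - w // 2), int(center[1] - h // 2)
        crop = frame[max(y0, 0):y0 + h, max(x0, 0):x0 + w]
        if crop.size == 0:
            continue
        # Resize back to the template size so the position filter applies directly.
        crop = cv2.resize(crop, base_size)
        score = score_patch(crop).max()
        if score > best_score:
            best_score, best_scale = score, s
    return best_scale, best_score
```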

[0054] Step 4: The classifier outputs a response map; the discriminative correlation filter generates the peak-to-sidelobe ratio corresponding to the peak of the response map;
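The peak-to-sidelobe ratio mentioned in Step 4 is a standard confidence measure for correlation response maps: the peak is compared against the mean and standard deviation of the response outside a small window around the peak. A minimal sketch follows; the window size is an assumed value.

```python
import numpy as np

def peak_to_sidelobe_ratio(response, exclude=11):
    """PSR = (peak - mean(sidelobe)) / std(sidelobe), where the sidelobe is the
    response map with a small window around the peak masked out."""
    peak = response.max()
    py, px = np.unravel_index(np.argmax(response), response.shape)
    mask = np.ones_like(response, dtype=bool)
    half = exclude // 2
    mask[max(py - half, 0):py + half + 1, max(px - half, 0):px + half + 1] = False
    sidelobe = response[mask]
    return (peak - sidelobe.mean()) / (sidelobe.std() + 1e-12)
```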

[0055] Step 5: Compare the peak value of the response map with the peak-to-sidelobe ratio. If the peak value of the response map is greater than the peak-to-sidelobe ratio, introduce an online random fern classifier for re-detection; if the peak value of the response map is less than the peak-to-sidelobe ratio, update the position model of Step 2 and the scale model of Step 3; if the peak value of the response map is equal to the peak-to-sidelobe ratio, continue to maintain the current visual tracking state;
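Taken literally, Step 5 is a three-way branch on the comparison between the response-map peak and its peak-to-sidelobe ratio. The sketch below mirrors that branch exactly as described; the callables standing in for the online random fern re-detector, the model update, and the keep-current action are placeholders, since the patent text shown here does not detail them.

```python
def step5_decision(response, psr, redetect_with_ferns, update_models, keep_current):
    """Branch on the response-map peak versus its peak-to-sidelobe ratio (Step 5)."""
    peak = float(response.max())
    if peak > psr:
        redetect_with_ferns()   # peak > PSR: hand over to the online random fern re-detector
    elif peak < psr:
        update_models()         # peak < PSR: refresh the position model (Step 2) and scale model (Step 3)
    else:
        keep_current()          # peak == PSR: keep the current visual tracking state
```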

Abstract

The present invention provides a method for visual tracking through spatio-temporal context, comprising the following steps. Step 1: initialize parameters. Step 2: train a context-aware filter to obtain a position model. Step 3: train a scale correlation filter and take the maximum scale response to obtain a scale model. Step 4: the classifier outputs a response map, and the discriminative correlation filter generates the peak-to-sidelobe ratio corresponding to the peak of the response map. Step 5: compare the peak value of the response map with the peak-to-sidelobe ratio; if the peak value is greater than the peak-to-sidelobe ratio, introduce an online random fern classifier for re-detection; if the peak value is less than the peak-to-sidelobe ratio, update the position model of Step 2 and the scale model of Step 3; if the peak value is equal to the peak-to-sidelobe ratio, continue to maintain the current visual tracking state. Step 6: apply the updated position model and scale model to the tracking of the next frame, and return to Step 4.
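Read as a whole, the abstract describes a per-frame loop over Steps 4 and 5, with the models from Steps 2 and 3 carried forward to the next frame (Step 6). The sketch below only fixes that control flow; every callable is a placeholder for the component named in the corresponding step, not an implementation from the patent.

```python
def track(frames, init_box, train_position_model, train_scale_model,
          respond, peak_to_sidelobe_ratio, redetect, update, keep):
    """High-level sketch of the claimed tracking loop (Steps 1-6) with placeholder callables."""
    box = init_box                                          # Step 1: initialize parameters
    position_model = train_position_model(frames[0], box)   # Step 2: position model
    scale_model = train_scale_model(frames[0], box)         # Step 3: scale model
    for frame in frames[1:]:                                # Step 6: next frame, back to Step 4
        response = respond(position_model, scale_model, frame, box)   # Step 4: response map
        psr = peak_to_sidelobe_ratio(response)
        peak = response.max()
        if peak > psr:                                      # Step 5: re-detection
            box = redetect(frame)
        elif peak < psr:                                    # Step 5: update both models
            position_model, scale_model, box = update(frame, box)
        else:                                               # Step 5: keep the current state
            box = keep(box)
    return box
```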

Description

Technical Field

[0001] The present invention relates to the field of visual tracking in computer vision, and in particular to a method for visual tracking through spatio-temporal context.

Background Technique

[0002] Visual tracking is an important research hotspot in computer vision and is widely used in video surveillance, autonomous driving, car navigation, and human-computer interaction. The purpose of tracking is to accurately estimate the position of the target in subsequent frames when its position in the first frame is known. Although visual tracking has made great progress in recent years, it still faces challenges from many external factors. For example, during long-term tracking the target usually experiences external disturbances, such as occlusion, illumination change, deformation, scale change, and moving out of view, which affect tracking accuracy.

[0003] Tracking tasks are generally divided into location estimation and scale estimation, which are ...

Application Information

Patent Type & Authority Patents(China)
IPC IPC(8): G06T7/246G06T7/73
CPCG06T7/246G06T7/73G06T2207/20024G06T2207/10016
Inventor 柳培忠陈智骆炎民杜永兆张万程
Owner HUAQIAO UNIVERSITY