
Video target tracking method based on compound sparse model

A technology combining sparse models and target tracking, applied in the field of video target tracking based on a compound sparse model

Active Publication Date: 2015-04-01
SHANGHAI JIAO TONG UNIV

AI Technical Summary

Problems solved by technology

[0006] In the prior art, single-task sparse-representation tracking algorithms completely ignore the connections between particles, while existing multi-task sparse-representation tracking algorithms over-strengthen the commonality between particles. Aiming at this problem, the present invention proposes a video target tracking method based on a composite sparse model.


Image

Three drawings, each titled "Video target tracking method based on compound sparse model".

Examples

Experimental program
Comparison scheme
Effect test

Embodiment 1

[0039] In this embodiment, the target state adopts a six-variable affine model, i.e. x_t = {a_t, b_t, θ_t, s_t, α_t, φ_t}, where the six parameters represent the position coordinates (a_t, b_t), rotation angle, scale, aspect ratio, and tilt angle, respectively. The motion model of the target adopts a random-walk model: the new state is obtained by sampling from a Gaussian distribution centered on the previous state. The raw observation of the target is the pixel values of the region defined by the state; this region is down-sampled and compressed into a vector, which becomes the observation actually used in the composite sparse model. The likelihood function p(O_t | x_t) is defined by the similarity between the particle appearance and the dictionary templates, i.e., the reconstruction error of the particle's composite sparse appearance model is assumed to obey a Gaussian distribution. At time t, the current dictionary is D_{t-1} and there are n particles. Then in this example, ...
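The particle-filter machinery described above can be sketched as follows. This is an illustrative outline, not the patent's implementation: the noise standard deviations, particle count, and likelihood variance are assumed values chosen for demonstration, and the per-particle reconstruction errors (which the patent obtains from the composite sparse appearance model) are stubbed with random numbers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Six-variable affine state: (a, b, theta, s, alpha, phi) =
# position x, position y, rotation, scale, aspect ratio, tilt.
STATE_DIM = 6
# Assumed random-walk standard deviations, one per state variable.
SIGMA = np.array([4.0, 4.0, 0.02, 0.01, 0.005, 0.001])

def propagate(particles):
    """Random-walk motion model: new state = previous state + Gaussian noise."""
    return particles + rng.normal(0.0, SIGMA, size=particles.shape)

def likelihood(reconstruction_errors, sigma2=0.1):
    """p(O_t | x_t): reconstruction error assumed to follow a Gaussian,
    so the (unnormalized) likelihood decays exponentially with the error."""
    return np.exp(-reconstruction_errors / sigma2)

n = 400  # assumed particle count
particles = np.tile([100.0, 80.0, 0.0, 1.0, 1.0, 0.0], (n, 1))
particles = propagate(particles)

# Stand-in for the per-particle reconstruction errors ||y_i - D c_i||^2
# that the composite sparse model would supply.
errors = rng.random(n)

w = likelihood(errors)
w /= w.sum()                    # normalized particle weights
estimate = particles[np.argmax(w)]  # state of the highest-weight particle
```

In a full tracker the `errors` array would come from solving the compound sparse coding problem against the dictionary D_{t-1}, and the weighted particles would be resampled before the next frame.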



Abstract

The invention discloses a video target tracking method based on a compound sparse model, belonging to the field of computer vision. Under a particle filtering framework, the compound sparse appearance model decomposes the joint sparse coefficient matrix of all particle observations into set sparsity, element sparsity, and outlier sparsity, which represent the features the particles share in the dictionary, the features they do not share, and additive sparse noise, respectively. The compound sparsity is realized by L1,∞-norm and L1,1-norm regularization, and the resulting optimization problem is solved with the alternating direction method of multipliers, achieving high computational efficiency. The invention also provides a dynamic dictionary update method, so that changes in target appearance can be accommodated. Experiments show that the tracking performance and robustness of the algorithm are superior to several conventional video target tracking algorithms compared against. The method can be applied to human-computer interaction, intelligent surveillance, intelligent transportation, visual navigation, video retrieval, and other fields.
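The two mixed norms named in the abstract are straightforward to compute on a joint coefficient matrix. The sketch below (with an illustrative matrix, not data from the patent) shows how the L1,∞ norm penalizes the number of active dictionary rows shared across particles, while the L1,1 norm penalizes every nonzero entry individually:

```python
import numpy as np

# Joint coefficient matrix C: rows = dictionary atoms, columns = particles.
# Row 0 is shared by all particles, row 1 is inactive, row 2 is partly used.
C = np.array([[0.5, 0.4, 0.6],
              [0.0, 0.0, 0.0],
              [0.1, 0.0, 0.2]])

# L1,inf norm: sum over rows of the max |entry| in each row.
# Small only when whole rows are zero, so it promotes shared (set) sparsity.
l1_inf = np.abs(C).max(axis=1).sum()   # 0.6 + 0.0 + 0.2 = 0.8

# L1,1 norm: sum of |entries|, i.e. element-wise sparsity with no
# coupling between particles.
l1_1 = np.abs(C).sum()                 # 1.8
```

Combining both terms (plus a sparse outlier term) is what lets the compound model capture what particles share without forcing them to be identical.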

Description

Technical field
[0001] The invention relates to a technology in the field of video processing, in particular to a video target tracking method based on a composite sparse model, applicable to human-computer interaction, intelligent surveillance, intelligent transportation, visual navigation, and video retrieval.
Background technique
[0002] Video tracking is an important problem in the field of computer vision. Its task is to analyze the two-dimensional image sequence captured by a camera and continuously locate the target or region of interest. Video tracking technology has broad application prospects in both civilian and military fields. Modern intelligent surveillance systems need to automatically detect, track, and identify targets in the field of view, capture abnormal situations, and issue early warnings. The lack of a reliable and efficient video object tracking algorithm is one of the main bottlenecks in the field of intelligent surveillance. ...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T7/20
CPC: G06T7/20; G06T2207/10016
Inventors: Jing Zhongliang, Jin Bo, Wang Meng, Pan Han
Owner SHANGHAI JIAO TONG UNIV