
Target tracking method based on spatio-temporal information fusion

A target tracking method using spatio-temporal technology, applied in neural learning methods, image analysis, image enhancement, etc., to achieve the effect of improved robustness

Pending Publication Date: 2022-06-10
XIAN UNIV OF TECH

AI Technical Summary

Problems solved by technology

[0004] The object of the present invention is to provide a target tracking method based on spatio-temporal information fusion, which solves the problem that existing tracking methods use only the target's appearance information in the spatial dimension and cannot readily handle changes in the target's characteristics in the temporal dimension.



Examples


Embodiment 1

[0061] In step 1, the ILSVRC2015 dataset is used to train the VGG-M network. The ILSVRC2015 dataset contains 3602 videos in total, each with 100 to 120 frames. Ten frames are randomly selected from each video, and each frame generates 50 positive samples and 200 negative samples based on the target region. The loss is calculated with the cross-entropy function. The initial learning rate of the network is 0.0001, and training runs for 100 epochs.
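The positive/negative sample generation described above is conventionally done by measuring overlap between a candidate box and the ground-truth target box. The patent does not state the overlap thresholds, so the values below (IoU ≥ 0.7 positive, IoU ≤ 0.5 negative, as commonly used by MDNet-style trackers) are assumptions. A minimal pure-Python sketch:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x, y, w, h)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix1, iy1 = max(ax, bx), max(ay, by)
    ix2, iy2 = min(ax + aw, bx + bw), min(ay + ah, by + bh)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

def label_samples(target_box, candidates, pos_thr=0.7, neg_thr=0.5):
    """Split candidate boxes into positive/negative training samples by IoU.

    Thresholds are assumed, not taken from the patent; candidates whose
    overlap falls between the thresholds are discarded as ambiguous.
    """
    pos = [c for c in candidates if iou(target_box, c) >= pos_thr]
    neg = [c for c in candidates if iou(target_box, c) <= neg_thr]
    return pos, neg
```

Candidates whose IoU falls between the two thresholds are deliberately dropped, since they are too ambiguous to label either way.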

[0062] In step 2, the target state in the first frame of the video is set first. The corresponding region is extracted and scaled to a size of 107×107×3. The scaled target region is then fed into the feature extraction network VGG-M to obtain a depth feature of dimension 4096.
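The extract-and-scale step above is a crop followed by a resize to a fixed input size. The patent does not state the interpolation method, so the nearest-neighbour resize below is an illustrative assumption:

```python
def crop_and_resize(image, box, out_size=107):
    """Crop box=(x, y, w, h) from image (a list of rows of pixels) and
    resize it to out_size x out_size using nearest-neighbour sampling.

    Nearest-neighbour interpolation is an assumption for illustration;
    the patent only specifies the 107x107 output size.
    """
    x, y, w, h = box
    out = []
    for r in range(out_size):
        src_r = y + min(h - 1, int(r * h / out_size))
        row = []
        for c in range(out_size):
            src_c = x + min(w - 1, int(c * w / out_size))
            row.append(image[src_r][src_c])
        out.append(row)
    return out
```

In practice the third dimension (the 3 colour channels) lives inside each pixel value, so the same routine covers the 107×107×3 input without change.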

[0063] The tracker MDNet, which is based on target appearance features in step 3, also uses the VGG-M network trained in step 1 as its feature extractor. The input to MDNet is the target region scaled to 107×107×3, and the output is the probability that the target region ...
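MDNet's binary output (target vs. background) is conventionally produced by a two-way softmax over the classifier's two scores; the sketch below assumes that standard formulation, since the paragraph above is truncated before stating it:

```python
import math

def softmax2(score_target, score_background):
    """Two-way softmax: probability that a candidate region is the target.

    Assumes MDNet's usual target/background binary classification head;
    subtracting the max keeps the exponentials numerically stable.
    """
    m = max(score_target, score_background)
    et = math.exp(score_target - m)
    eb = math.exp(score_background - m)
    return et / (et + eb)
```

The two class probabilities always sum to 1, so only the target probability needs to be returned.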


PUM

No PUM

Abstract

The invention discloses a target tracking method based on spatio-temporal information fusion. The method comprises the following specific steps: selecting a feature extraction network, a base tracker that tracks using target appearance features, and a position prediction module that predicts using target motion features; acquiring a tracking video, selecting the area where the target is located in the first frame of the video, and extracting the depth features of that area; obtaining a tracking result based on target appearance features using the base tracker; obtaining a tracking result based on target motion features using the position prediction module; and extracting the depth features of the two tracking results, measuring the similarity between each of these depth features and the depth feature of the first-frame target, and taking the result with the maximum similarity as the final tracking result. The invention can automatically switch between using target appearance features and target motion features according to different scenes, improving the robustness of the tracking method.
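The final selection step in the abstract (compare both candidates' deep features to the first-frame template and keep the more similar one) can be sketched as below. Cosine similarity is an assumption here, since the abstract does not name the similarity measure:

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two feature vectors (assumed measure)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def fuse(template_feat, appearance_feat, motion_feat,
         appearance_result, motion_result):
    """Spatio-temporal fusion step: keep whichever tracking result's
    deep feature is closer to the first-frame template feature."""
    s_app = cosine_similarity(template_feat, appearance_feat)
    s_mot = cosine_similarity(template_feat, motion_feat)
    return appearance_result if s_app >= s_mot else motion_result
```

Because the comparison is re-run every frame, this is what lets the method "automatically switch" between the appearance-based and motion-based branches as the scene changes.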

Description

Technical field

[0001] The invention belongs to the technical field of video target tracking, and relates to a target tracking method based on spatio-temporal information fusion.

Background technique

[0002] Video object tracking is an important branch of computer vision. The task of video target tracking is to determine the position, shape, or occupied area of the tracked target in consecutive video image frames, and to determine motion information such as the target's speed, direction, and trajectory. Object tracking has important research significance and broad application prospects; it is mainly used in video surveillance, human-computer interaction, and intelligent transportation.

[0003] Most existing target tracking methods use only the target's appearance information in the spatial dimension for tracking. With the rapid development of deep learning, the feature extraction ability of target tracking methods has become stronger and stronger, which makes th...

Claims


Application Information

Patent Timeline
no application
Patent Type & Authority: Applications (China)
IPC (8): G06T7/246, G06N3/04, G06N3/08, G06K9/62, G06V10/80, G06V10/82
CPC: G06T7/246, G06N3/08, G06T2207/10016, G06T2207/20081, G06T2207/20084, G06N3/044, G06N3/045, G06F18/251
Inventor: 刘龙 (Liu Long), 付志豪 (Fu Zhihao)
Owner XIAN UNIV OF TECH