Visual tracking method based on spatial attention feature aggregation

A visual tracking and attention technology, applied in the field of visual tracking systems, which can solve problems such as the difficulty of reaching real-time running speed and the high cost of model retraining, and achieves the effects of an improved tracking result and strong perception ability.

Active Publication Date: 2021-01-22
FUZHOU UNIV

AI Technical Summary

Problems solved by technology

However, current offline methods rely on large-scale offline training and incur a high cost of model retraining, while online methods (which update the appearance model online) that use deep features struggle to achieve real-time running speed.




Embodiment Construction

[0046] The present invention will be further described below in conjunction with the accompanying drawings and embodiments.

[0047] It should be pointed out that the following detailed description is exemplary and is intended to provide further explanation of the present application. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs.

[0048] It should be noted that the terminology used here is only for describing specific implementations and is not intended to limit the exemplary implementations according to the present application. As used herein, unless the context clearly dictates otherwise, the singular forms are intended to include the plural forms as well. It should also be understood that when the terms "comprising" and/or "including" are used in this specification, they indicate the presence of the stated features, steps, operations, devices, components and/or combinations thereof.



Abstract

The invention relates to a visual tracking method based on spatial attention feature aggregation. The method comprises the following steps: S1, obtaining the first-frame annotation of a video or image sequence, inputting the first-frame image into the template branch of a Siamese (twin) network tracking architecture, and extracting shallow, intermediate and deep features of the target region, wherein the twin network tracking architecture comprises a template branch and a search branch; S2, constructing a spatial attention mask and calculating the features after the mask operation; S3, calculating an accuracy quality value for the shallow features and a robustness quality value for the intermediate features, selecting feature channels based on a regional feature aggregation method, and concatenating the feature tensors as the aggregation result; S4, adjusting the deep-layer network parameters with a regression method according to the prior label, recalculating the output and aggregation of the deep features, and performing a convolution operation on the search branch of the twin network tracking architecture using all the obtained aggregated features as templates to obtain the output result. The invention enables effective visual tracking.
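As a rough illustration of the S1-S4 pipeline described above, the following is a minimal PyTorch-style sketch. The backbone, tensor shapes, the channel-quality heuristic, the top-k channel selection and the final cross-correlation are all assumptions made for this sketch, not the patented implementation, and the regression-based adjustment of the deep-layer parameters in S4 is omitted.

```python
# Illustrative sketch only: module names, feature dimensions and the
# quality/aggregation heuristics below are assumptions, not the patented method.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SiameseBackbone(nn.Module):
    """Toy three-stage backbone returning shallow, intermediate and deep features."""

    def __init__(self):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Conv2d(3, 32, 3, 2, 1), nn.ReLU())    # shallow
        self.stage2 = nn.Sequential(nn.Conv2d(32, 64, 3, 2, 1), nn.ReLU())   # intermediate
        self.stage3 = nn.Sequential(nn.Conv2d(64, 128, 3, 2, 1), nn.ReLU())  # deep

    def forward(self, x):
        f1 = self.stage1(x)
        f2 = self.stage2(f1)
        f3 = self.stage3(f2)
        return f1, f2, f3


def spatial_attention_mask(feat):
    # S2 (assumed form): channel-averaged activation passed through a sigmoid,
    # broadcast back over the channels as a spatial attention mask.
    attn = torch.sigmoid(feat.mean(dim=1, keepdim=True))
    return feat * attn


def channel_quality(feat):
    # S3 (assumed form): one scalar quality value per channel; here simply the
    # mean absolute activation, standing in for the accuracy/robustness scores.
    return feat.abs().mean(dim=(2, 3))  # shape (N, C)


def aggregate(shallow, mid, deep, k=16):
    # S3: select the k highest-quality shallow channels (accuracy) and
    # intermediate channels (robustness), resize them to the deep feature's
    # spatial size, and concatenate everything into one aggregated tensor.
    def top_k_channels(feat):
        idx = channel_quality(feat).topk(k, dim=1).indices[0]  # assumes batch size 1
        return feat[:, idx]

    size = deep.shape[-2:]
    parts = [F.adaptive_avg_pool2d(top_k_channels(shallow), size),
             F.adaptive_avg_pool2d(top_k_channels(mid), size),
             deep]
    return torch.cat(parts, dim=1)


def track(template_img, search_img, backbone):
    # S1: extract multi-level features on both branches of the Siamese network;
    # S2: apply the spatial attention mask; S3: aggregate the selected channels.
    z = aggregate(*[spatial_attention_mask(f) for f in backbone(template_img)])
    x = aggregate(*[spatial_attention_mask(f) for f in backbone(search_img)])
    # S4 (simplified): cross-correlate the search features with the aggregated
    # template features used as a convolution kernel; the regression-based
    # fine-tuning of the deep-layer parameters is omitted here.
    return F.conv2d(x, z)  # response map; its peak indicates the target location


backbone = SiameseBackbone()
template = torch.randn(1, 3, 127, 127)  # crop around the first-frame annotation
search = torch.randn(1, 3, 255, 255)    # search region from a subsequent frame
response = track(template, search, backbone)
print(response.shape)  # e.g. torch.Size([1, 1, 17, 17])
```

In the claimed method, the accuracy and robustness quality values, the regional feature aggregation rule, and the regression step of S4 replace the simple heuristics used above.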

Description

Technical Field

[0001] The invention relates to the technical field of image processing, in particular to a visual tracking system based on spatial attention feature aggregation.

Background Technique

[0002] Computer vision is the science of how to make computers "understand" and "perceive" through video and image sequences. In the field of computer vision, video-based target tracking (visual tracking) is an important problem and research direction. The visual tracking task aims to track a target of interest in a video or image sequence. Generally, the target is marked in the first frame, and the tracking method must predict the corresponding target estimate for each subsequent frame. Visual tracking technology is a common component of large-scale visual systems. It has important applications in intelligent video surveillance, the modern military, video-based human-computer interaction, intelligent transportation systems, intelligent visual navigation, ...


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06T7/246; G06K9/00; G06K9/62; G06N3/04; G06N3/08
CPC: G06T7/246; G06N3/08; G06T2207/10016; G06V20/41; G06V20/52; G06N3/045; G06F18/214
Inventors: 柯逍, 李悦洲, 叶宇
Owner: FUZHOU UNIV