
Visual target tracking method based on deep residual network characteristics

A target tracking and network feature technology, applied in the field of visual target tracking based on deep residual network features, which addresses problems such as the restricted application of tracking algorithms in practical scenes.

Active Publication Date: 2019-05-24
CHANGAN UNIV +1

AI Technical Summary

Problems solved by technology

Discriminative tracking algorithms train a classifier to distinguish the target from the background and select the candidate sample with the highest confidence as the prediction result. However, because existing tracking algorithms are limited in accuracy and speed, their application in practical scenes remains restricted.
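For illustration only, the discriminative scheme described above (score candidate samples with a classifier and keep the most confident one) can be sketched as follows; the candidate sampler and the scoring function are hypothetical placeholders, not components of the disclosed method.

```python
import numpy as np

def discriminative_track_step(frame, prev_box, score_candidate, sample_candidates):
    """One generic discriminative tracking step (illustrative sketch only).

    frame             : current image as an ndarray
    prev_box          : (x, y, w, h) of the target in the previous frame
    score_candidate   : hypothetical classifier returning target confidence for a box
    sample_candidates : hypothetical function producing candidate boxes around prev_box
    """
    candidates = sample_candidates(frame, prev_box)
    # Confidence of each candidate being the target rather than background.
    scores = np.array([score_candidate(frame, box) for box in candidates])
    best = int(np.argmax(scores))          # candidate with the highest confidence
    return candidates[best], float(scores[best])
```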

Method used



Examples

  • Experimental program
  • Comparison scheme
  • Effect test

Embodiment Construction

[0037] As shown in figure 1, a visual target tracking method based on deep residual network features according to the present invention comprises the following steps:

[0038] Step 1. Select the feature layers of the deep residual network and calculate the weight corresponding to each feature layer: on a labeled public data set, use each layer of the deep residual network ResNet-N to extract features from the labeled videos separately and compute the tracking overlap rate; select the three layers with the highest tracking overlap rates to construct the first training sample and train the convolutional neural network CNN1. CNN1 is composed of an input layer I1, convolutional layer C1, pooling layer P1, convolutional layer C2, pooling layer P2, convolutional layer C3, pooling layer P3, a fully connected layer F and an output layer O1. The image sequence to be tracked passes through the convolutional neural...
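The layer sequence listed for CNN1 (input layer I1, three convolution/pooling stages C1/P1 to C3/P3, a fully connected layer F and an output layer O1) could be arranged roughly as in the PyTorch sketch below. The channel counts, kernel sizes, input resolution and number of outputs are assumptions added for illustration; the excerpt names only the layer types.

```python
import torch
import torch.nn as nn

class CNN1(nn.Module):
    """Sketch of CNN1: I1 -> C1/P1 -> C2/P2 -> C3/P3 -> F -> O1.

    All dimensions are assumed; the patent excerpt does not specify them.
    """
    def __init__(self, in_channels=3, num_outputs=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1),  # C1
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                                       # P1
            nn.Conv2d(16, 32, kernel_size=3, padding=1),           # C2
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                                       # P2
            nn.Conv2d(32, 64, kernel_size=3, padding=1),           # C3
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                                       # P3
        )
        self.fc = nn.Linear(64 * 8 * 8, 128)     # F, assuming a 64x64 input patch
        self.out = nn.Linear(128, num_outputs)   # O1

    def forward(self, x):                        # x: (N, in_channels, 64, 64) assumed
        x = self.features(x)
        x = torch.flatten(x, 1)
        return self.out(torch.relu(self.fc(x)))
```

If O1 is meant to produce one weight per selected feature layer, its outputs could be passed through a softmax; that reading is an assumption, since the excerpt is truncated before the role of the output layer is stated.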


PUM

No PUM available.

Abstract

The invention discloses a visual target tracking method based on deep residual network characteristics. The visual target tracking method comprises the following steps: 1, selecting the feature layers of a deep residual network and calculating their weights; 2, extracting features of the first frame of the actual input image; 3, constructing the response of the first-frame features and an initial position filter; 4, performing scale sampling and fHOG feature extraction on the first frame of the actual input image; 5, constructing an initial scale filter; 6, extracting features of the second frame of the actual input image; 7, performing position filtering; 8, weighting the position-filtering response maps and locating the target; 9, performing scale sampling and fHOG feature extraction on the target image; 10, performing scale filtering and scale estimation on the target feature vector; 11, updating the filters; and 12, inputting the next frame of the actual input image, regarding it as the second frame, and repeating from step 6. The method has high tracking precision and success rate, adapts to target scale changes, and achieves robust tracking of the target.
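For context only: position filtering, peak localization on the response map, and filter updating, as listed in steps 7, 8 and 11, are commonly realised in the Fourier domain in generic correlation-filter trackers. The minimal single-channel MOSSE-style sketch below illustrates that idea; it is not the patent's implementation and omits the fHOG features, the multi-layer response weighting and the separate scale filter described in the abstract.

```python
import numpy as np

def gaussian_response(shape, sigma=2.0):
    """Desired response: a Gaussian peak at the centre of the target patch."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    return np.exp(-((ys - h // 2) ** 2 + (xs - w // 2) ** 2) / (2.0 * sigma ** 2))

class CorrelationFilterSketch:
    """Single-channel MOSSE-style correlation filter (illustrative only)."""

    def __init__(self, lam=1e-2, lr=0.02):
        self.lam, self.lr = lam, lr   # regularisation term and learning (update) rate
        self.A = self.B = self.G = None

    def init(self, feat):
        """Initialise the filter from the first-frame feature patch (cf. step 3)."""
        self.G = np.fft.fft2(gaussian_response(feat.shape))
        F = np.fft.fft2(feat)
        self.A = self.G * np.conj(F)              # filter numerator
        self.B = F * np.conj(F) + self.lam        # filter denominator

    def respond(self, feat):
        """Position filtering (cf. step 7): response map and its peak location."""
        F = np.fft.fft2(feat)
        resp = np.real(np.fft.ifft2((self.A / self.B) * F))
        dy, dx = np.unravel_index(np.argmax(resp), resp.shape)
        return resp, (int(dy), int(dx))           # peak offset locates the target

    def update(self, feat):
        """Filter update (cf. step 11): running average of numerator and denominator."""
        F = np.fft.fft2(feat)
        self.A = (1 - self.lr) * self.A + self.lr * (self.G * np.conj(F))
        self.B = (1 - self.lr) * self.B + self.lr * (F * np.conj(F) + self.lam)
```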

Description

Technical Field

[0001] The invention belongs to the technical field of target tracking, and in particular relates to a visual target tracking method based on deep residual network features.

Background Technique

[0002] Artificial intelligence is applied very widely, covering many technical fields, including computer vision, natural language processing, cognition and reasoning, robotics, gaming and ethics, and machine learning. Vision is the most important source of information for the human brain and is also the gateway to artificial intelligence; about 70% of human cerebral cortex activity is devoted to processing visual information. Computer vision takes images (videos) as input to study image information organization and object and scene recognition, and then interprets events, so as to realize the expression and understanding of the environment.

[0003] As one of the basic problems in the field of computer vision, object tracking ...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T7/246; G06N3/04
Inventors: 马素刚, 赵祥模, 侯志强, 王忠民, 惠飞
Owner: CHANGAN UNIV