
A multi-layer convolution feature self-adaptive fusion moving target tracking method

A self-adaptive moving-target tracking technology, applied in the field of computer vision, which addresses problems such as hand-crafted features that cannot comprehensively express the target, poor robustness to appearance changes, and large variation in tracking performance across scenarios.

Active Publication Date: 2019-05-28
KUNMING UNIV OF SCI & TECH
Cites: 14 | Cited by: 35

AI Technical Summary

Problems solved by technology

[0005] The technical problem to be solved by the present invention is to provide a moving target tracking method with adaptive fusion of multi-layer convolution features. Traditional hand-crafted features such as the Histogram of Oriented Gradients (HOG) and Color Names (CN) cannot fully express the target: they have difficulty capturing the target's semantic information, they are not robust to complex appearance changes such as deformation and rotation, and their tracking performance differs greatly across scenarios. The present method instead judges the reliability of each convolutional layer's response with the APCE measure and uses it to calculate per-layer fusion weights, which improves tracking accuracy.
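As background for that weighting step, APCE (Average Peak-to-Correlation Energy) is conventionally defined as APCE = |F_max − F_min|² / mean(Σ_{w,h}(F_{w,h} − F_min)²) over a response map F. A minimal sketch of that computation (the helper name and the use of NumPy are illustrative, not from the patent):

```python
import numpy as np

def apce(response):
    """Average Peak-to-Correlation Energy of a correlation response map.

    A sharp single peak (a reliable detection) yields a high APCE;
    a flat or multi-peaked map yields a low one, so APCE can serve
    as a per-layer reliability weight.
    """
    f_max, f_min = response.max(), response.min()
    return (f_max - f_min) ** 2 / np.mean((response - f_min) ** 2)
```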



Examples


Embodiment 1

[0057] Embodiment 1: As shown in Figure 1, a moving target tracking method with adaptive fusion of multi-layer convolution features comprises the following specific steps:

[0058] Step1. Initialize the target in the input image and select the target area: process the first frame of the image and, centered on the target position, collect an image block whose size is twice the size of the target;
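A minimal sketch of such a crop, assuming a NumPy image and (row, col) coordinates; the edge-replication padding is an illustrative detail, not specified in the patent:

```python
import numpy as np

def crop_search_window(frame, center, target_size):
    """Crop an image block centered on the target, twice the target size."""
    (cy, cx), (h, w) = center, target_size
    win_h, win_w = 2 * h, 2 * w
    pad = max(win_h, win_w)  # replicate edges so the crop never leaves the frame
    padded = np.pad(frame, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    top, left = cy - win_h // 2 + pad, cx - win_w // 2 + pad
    return padded[top:top + win_h, left:left + win_w]
```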

[0059] Step2. Use the pre-trained deep network framework VGG-19 to extract the first- and fifth-layer convolutional features of the target area as training samples, and use the training samples to train the position filter template.
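The excerpt does not show the filter equations, but position filters of this kind are usually learned per feature layer in the Fourier domain with a ridge-regression (DCF-style) closed form against a Gaussian label map. A hedged sketch under that assumption:

```python
import numpy as np

def train_position_filter(features, sigma=2.0, lam=1e-4):
    """Learn a correlation-filter template from one feature sample.

    features: H x W x C conv features (e.g. upsampled VGG-19 conv1 or conv5).
    Returns the filter's Fourier-domain numerator and denominator, the
    standard DCF ridge-regression solution (an assumed formulation).
    """
    H, W, _ = features.shape
    ys, xs = np.mgrid[0:H, 0:W]
    # Gaussian regression target centered on the patch
    y = np.exp(-((ys - H // 2) ** 2 + (xs - W // 2) ** 2) / (2 * sigma ** 2))
    yf = np.fft.fft2(y)
    xf = np.fft.fft2(features, axes=(0, 1))
    num = yf[..., None] * np.conj(xf)                  # per-channel numerator
    den = np.sum(xf * np.conj(xf), axis=2).real + lam  # shared denominator
    return num, den
```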

[0060] Step3. Extract the same two layers of convolutional features from the target area of the second frame of the image to obtain two detection samples, and calculate the correlation scores between the two detection samples and the position filters trained on the first frame, i.e. the response maps of the two feature layers.
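A matching detection sketch, continuing the assumed DCF formulation from the previous snippet (one response map is computed per feature layer):

```python
import numpy as np

def detect(features, num, den):
    """Correlate one detection sample with a learned position filter.

    Returns the spatial response map; its maximum marks the most
    likely target position for this feature layer.
    """
    zf = np.fft.fft2(features, axes=(0, 1))
    resp_f = np.sum(num * zf, axis=2) / den
    return np.real(np.fft.ifft2(resp_f))
```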

[0061] Step4. Calculate the we...
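Per the Abstract, Step4 computes an APCE-based weight for each layer's response map and adaptively fuses the maps to fix the final target position. A minimal sketch of that fusion (the normalization is an assumption; the excerpt does not show the exact weighting formula):

```python
import numpy as np

def fuse_responses(resp_conv1, resp_conv5):
    """Weight two response maps by their APCE reliability, fuse, and locate."""
    def apce(r):
        f_max, f_min = r.max(), r.min()
        return (f_max - f_min) ** 2 / np.mean((r - f_min) ** 2)

    w1, w5 = apce(resp_conv1), apce(resp_conv5)
    total = w1 + w5
    fused = (w1 / total) * resp_conv1 + (w5 / total) * resp_conv5
    row, col = np.unravel_index(np.argmax(fused), fused.shape)
    return fused, (row, col)  # peak of the fused map = final target position
```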

Embodiment 2

[0066] Embodiment 2: The following describes processing of a specific video. Step1. In the first frame of the input image, centered on the target position, collect an image block whose size is twice the size of the target, as shown in Figure 2(a).

[0067] Step2. Use the VGG-19 network trained on ImageNet to extract the convolutional features of the target. As the CNN's forward propagation deepens, the semantic discrimination between different categories of objects strengthens, but the spatial resolution available for accurately locating the target drops. For example, for an input image of size 224×224, the convolutional feature output of the fifth pooling layer is 7×7, i.e. 1/32 of the input image size; this low spatial resolution is not enough to locate the target accurately. To solve this problem, we bilinearly interpolate the convolutional features of the first and fifth layers up to the sample size.
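A minimal sketch of that upsampling step, using SciPy's bilinear zoom as a stand-in for whatever interpolation routine the patent's implementation uses:

```python
import numpy as np
from scipy.ndimage import zoom

def upsample_features(feat, out_hw):
    """Bilinearly interpolate an H x W x C conv feature map to out_hw.

    E.g. 7x7 VGG-19 fifth-layer features are resized back toward the
    224x224 sample size so conv1 and conv5 share one spatial resolution.
    """
    H, W, _ = feat.shape
    return zoom(feat, (out_hw[0] / H, out_hw[1] / W, 1), order=1)  # order=1 = bilinear
```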



Abstract

The invention relates to a multi-layer convolution feature self-adaptive fusion moving target tracking method, and belongs to the field of computer vision. The method comprises the following steps. First, initialize the target area in the first frame of the image, use the trained deep network framework VGG-19 to extract the first- and fifth-layer convolutional features of the target image block, and obtain two templates by learning and training a correlation filter. Second, extract the features of a detection sample in the next frame at the predicted position and scale of the previous frame's target, and correlate the detection sample's features with the previous frame's two templates to obtain response maps for the two feature layers; calculate the weight of each response map according to the APCE measurement method, and adaptively weight and fuse the response maps to determine the final position of the target. After the position is determined, estimate the optimal target scale by extracting histogram-of-oriented-gradients features of the target at multiple scales. With this method the target is located more accurately and the tracking precision is improved.
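The final step, scale estimation from multi-scale HOG features, is not detailed in this excerpt. A hedged sketch of one common realization: score HOG features of crops at several candidate scales against a target template (here with scikit-image's hog and an assumed precomputed template vector; the matching rule is illustrative, not the patent's):

```python
import numpy as np
from skimage.feature import hog
from skimage.transform import resize

def best_scale(frame_gray, center, base_size, template,
               scales=(0.95, 1.0, 1.05)):
    """Pick the candidate scale whose HOG features best match a template.

    template: HOG vector of the target at its reference size, assumed
    precomputed with the same hog() parameters. Crops are assumed to
    stay inside the frame for simplicity.
    """
    cy, cx = center
    h, w = base_size
    scores = []
    for s in scales:
        sh, sw = int(h * s) // 2, int(w * s) // 2
        patch = frame_gray[cy - sh:cy + sh, cx - sw:cx + sw]
        patch = resize(patch, (h, w))  # normalize every crop to one size
        f = hog(patch, orientations=9, pixels_per_cell=(4, 4))
        # cosine similarity against the reference template
        scores.append(f @ template / (np.linalg.norm(f) * np.linalg.norm(template) + 1e-8))
    return scales[int(np.argmax(scores))]
```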

Description

Technical field

[0001] The invention discloses a moving target tracking method with adaptive fusion of multi-layer convolution features, which belongs to the field of computer vision.

Background technique

[0002] Moving target tracking is an important research direction in the field of computer vision. It has a very wide range of applications in military and civilian settings, such as battlefield surveillance, intelligent transportation systems, and human-computer interaction.

[0003] Since AlexNet achieved great success in image classification in 2012, a series of CNN (Convolutional Neural Network) frameworks have continuously set new records. Compared with AlexNet, the biggest improvement of VGGNet is to replace large convolution kernels with stacks of 3×3 kernels (3×3 is the smallest size that can capture the notions of up/down, left/right, and center), which enhances the generalization ability of the network and reduces the Top-5 error rate to 7.3%. In the VOT20...


Application Information

IPC(8): G06T7/246, G06N3/04, G06N3/08, G06K9/62
Inventor: 尚振宏, 王娜
Owner: KUNMING UNIV OF SCI & TECH