
Target image tracking method based on deep reinforcement learning and space-time context

A spatio-temporal context and reinforcement learning technique, applied in the field of image processing, that solves the problems of tracking drift and slow tracking speed and achieves the effect of avoiding tracking drift.

Pending Publication Date: 2019-11-26

AI Technical Summary

Problems solved by technology

To address the problems of tracking drift and slow tracking speed in the target tracking task, the present invention proposes a target tracking model (DRST) based on deep reinforcement learning and spatio-temporal context (STC) learning.



Detailed Description of the Embodiments

[0052] To make the objects, technical solutions, and advantages of the present invention clearer, the present invention is described in further detail below in conjunction with the embodiments. It should be understood that the specific embodiments described here are only intended to explain the present invention, not to limit it.

[0053] 1) Model framework

[0054] As shown in Figure 2, at each time step t the feature extraction network obtains an image x_t from the input sequence and generates visual features. To obtain spatio-temporal features, the visual features are first passed through the STC module and the recurrent neural network; the spatio-temporal features c_t and the hidden state h_t are then extracted from the STC and the RNN respectively, where the spatio-temporal features c_t serve as the ground truth. In particular, the RNN also receives the previous hidden state h_{t-1} as input. In the final stage, at each time step t, ...
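The per-time-step data flow described in [0054] can be sketched as follows. This is a minimal illustration, not the patent's implementation: the backbone layers, the GRU cell, the feature and hidden dimensions, and the fusion-by-concatenation step are all assumptions, since the text only specifies a feature extraction network, an STC module producing c_t, and an RNN producing h_t from h_{t-1}.

```python
# Minimal sketch of one DRST time step (assumed architecture, not the patent's).
import torch
import torch.nn as nn

class DRSTStep(nn.Module):
    def __init__(self, feat_dim=512, hidden_dim=256):
        super().__init__()
        # Feature extraction network: a small CNN stand-in (assumed layers/sizes).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, feat_dim),
        )
        # Recurrent part: visual features + previous hidden state h_{t-1} -> h_t.
        self.rnn = nn.GRUCell(feat_dim, hidden_dim)
        # Prediction head over the fused spatio-temporal representation
        # (here a 4-dimensional bounding-box offset, chosen for illustration).
        self.head = nn.Linear(hidden_dim + feat_dim, 4)

    def forward(self, x_t, c_t, h_prev):
        """x_t: image (B,3,H,W); c_t: STC feature (B,feat_dim); h_prev: (B,hidden_dim)."""
        v_t = self.backbone(x_t)          # visual features from the feature extractor
        h_t = self.rnn(v_t, h_prev)       # hidden state h_t from the RNN
        fused = torch.cat([h_t, c_t], 1)  # combine RNN state with STC context c_t
        pred_t = self.head(fused)         # per-step prediction
        return pred_t, h_t
```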



Abstract

The invention discloses a target image tracking method based on deep reinforcement learning and spatio-temporal context. The method comprises the following steps: 1) at each time step t, obtaining an image x_t from the input sequence with a feature extraction network and generating visual features; passing the visual features through an STC module and a recurrent neural network, then extracting the spatio-temporal features c_t and the hidden state h_t from the STC and the recurrent neural network respectively, and taking the spatio-temporal features c_t as the reference standard; 2) establishing the model; 3) training the model; and 4) tracking the target according to the position predicted by the model. The method and model achieve a relatively high success rate and precision score during tracking, showing that the proposed DRST model based on reinforcement learning and spatio-temporal context can track the target object over the long term and thereby avoid tracking drift during the tracking process.
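As a rough illustration of steps 1)–4), the driver loop below shows how a per-frame model such as the DRSTStep sketch above could be applied during tracking. The zero-valued STC feature, the additive box update, and the tensor shapes are placeholders; only the control flow (take image x_t, compute c_t and h_t, predict the target position, update the box) follows the abstract, and training (step 3) is omitted.

```python
# Schematic tracking loop (placeholders only; not the patent's procedure).
import torch

def track_sequence(frames, init_box, model, feat_dim=512, hidden_dim=256):
    """frames: list of (3,H,W) tensors; init_box: (4,) tensor from the first frame."""
    h_t = torch.zeros(1, hidden_dim)              # initial RNN hidden state
    box, boxes = init_box.clone(), [init_box.clone()]
    for x_t in frames[1:]:
        # Step 1: image x_t and its STC context feature c_t (placeholder feature here).
        c_t = torch.zeros(1, feat_dim)
        # Steps 2/4: forward pass of the (trained) model -> predicted position offset.
        pred_t, h_t = model(x_t.unsqueeze(0), c_t, h_t)
        box = box + pred_t.squeeze(0)             # Step 4: update the box estimate
        boxes.append(box.clone())
    return boxes
```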

Description

Technical Field

[0001] The invention relates to image processing, and in particular to a target image tracking method based on deep reinforcement learning and spatio-temporal context.

Background

[0002] Unlike its successful application in visual tasks such as object detection and object recognition, deep learning still faces many difficulties in the field of object tracking. The main problem is the lack of training data: deep models learn effectively from large amounts of labeled training data, whereas tracking provides only the bounding box of the first frame, so it is very difficult to train a deep model from scratch for the current target at the start of tracking. To address the problems of tracking drift and slow tracking speed in the target tracking task, the present invention proposes a target tracking model (DRST) based on deep reinforcement learning and spatio-temporal context...
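For readers unfamiliar with the STC component named in the background, the sketch below shows the standard spatio-temporal context update: a confidence map is learned in the Fourier domain and the context model is exponentially smoothed over time. It reflects the generic STC formulation rather than the patent's DRST model, and the parameter values (alpha, beta, rho, the sigma heuristic) are conventional defaults assumed for illustration.

```python
# Minimal NumPy sketch of a standard STC learn/detect step (generic formulation).
import numpy as np

def stc_step(patch_t, patch_next, h_stc, center, alpha=2.25, beta=1.0, rho=0.075):
    """patch_t, patch_next: grayscale context regions (H,W) around the last target
    position; h_stc: spatio-temporal context model, kept in the Fourier domain;
    center: (x, y) of the target inside the patch. Returns (updated h_stc, peak),
    where peak is the (row, col) of the new confidence maximum."""
    H, W = patch_t.shape
    ys, xs = np.mgrid[0:H, 0:W]
    dist = np.sqrt((xs - center[0]) ** 2 + (ys - center[1]) ** 2)
    sigma = 0.5 * (H + W) / 2.0                           # assumed scale heuristic
    w = np.exp(-(dist ** 2) / (2 * sigma ** 2))           # spatial weight on the context
    conf = np.exp(-np.abs(dist / alpha) ** beta)          # desired confidence map
    # Learn this frame's spatial context model in the Fourier domain.
    h_sc = np.fft.fft2(conf) / (np.fft.fft2(patch_t * w) + 1e-6)
    # Temporal update of the spatio-temporal context model.
    h_stc = (1 - rho) * h_stc + rho * h_sc
    # Detection on the next frame: response map and its peak location.
    resp = np.real(np.fft.ifft2(h_stc * np.fft.fft2(patch_next * w)))
    peak = np.unravel_index(np.argmax(resp), resp.shape)
    return h_stc, peak
```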


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T7/246; G06N3/04
CPC: G06T7/251; G06T2207/10016; G06T2207/20081; G06T2207/20084; G06N3/045
Inventors: 熊乃学, 邬春学, 刘开俊
Owner: 武汉智云星达信息技术有限公司