Moving target visual tracking method based on multi-source information fusion

A multi-source information fusion technology for moving targets, applied to image analysis, instrumentation, computing, and related fields; it can solve problems such as the inability of discrete frames to represent events.

Active Publication Date: 2021-04-20
DALIAN UNIV OF TECH

Problems solved by technology

Hu et al. collected a large-scale event-based tracking dataset. They acquired event data by placing an event camera in front of the display and r…




Embodiment Construction

[0042] The present invention is described in further detail below in conjunction with specific embodiments, but the invention is not limited to these embodiments.

[0043] A moving-target visual tracking method based on fusing event-domain and frame-domain information features, comprising the production of a data set and the training and testing of a network model.

[0044] (1) Production of training data set

[0045] To annotate the moving target in the grayscale frames and the event stream output by the event camera, two steps must be completed: transformation between the camera coordinate systems and coordinate transformation of the target positioning points.

[0046] To realize the coordinate-system transformation between the event camera and the VICON motion-capture system, we first determine the event camera's intrinsic matrix K and distortion coefficients d using a calibration board. Then the rotation vector r and translation vector t of the DAVIS346 can be obtained by the following formula,

[0047] r, t = S(K, d, ...
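Once K, r, and t are known, 3-D positioning points measured in the VICON frame can be projected into the camera's image plane to produce the annotations. The sketch below shows that pinhole projection with a hand-rolled Rodrigues conversion; it is a minimal illustration that ignores the distortion coefficients d, and the function names and intrinsic values (chosen to roughly match the 346×260 DAVIS346 sensor) are hypothetical, not from the patent.

```python
import numpy as np

def rodrigues(r):
    """Convert a rotation vector r (shape (3,)) to a 3x3 rotation matrix."""
    theta = np.linalg.norm(r)
    if theta < 1e-12:
        return np.eye(3)
    k = r / theta  # unit rotation axis
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])  # cross-product matrix of k
    # Rodrigues' rotation formula
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def project_point(Xw, K_mat, r, t):
    """Project a 3-D world point (e.g., a VICON marker) to pixel coordinates,
    assuming an ideal (undistorted) pinhole camera."""
    Xc = rodrigues(r) @ Xw + t            # world -> camera coordinates
    x, y = Xc[0] / Xc[2], Xc[1] / Xc[2]   # perspective division
    u = K_mat[0, 0] * x + K_mat[0, 2]     # apply focal length and principal point
    v = K_mat[1, 1] * y + K_mat[1, 2]
    return np.array([u, v])

# Illustrative intrinsics for a 346x260 sensor (hypothetical values).
K_cam = np.array([[300.0, 0.0, 173.0],
                  [0.0, 300.0, 130.0],
                  [0.0, 0.0, 1.0]])
```

In practice the calibration and pose step denoted S above would typically be handled by a library routine such as OpenCV's solvePnP, which also accounts for the distortion coefficients d.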



Abstract

The invention belongs to the technical field of computer vision and provides a moving-target visual tracking method based on multi-source information fusion. Aiming at visual tracking of moving targets in scenes with rapid motion and severe illumination, the invention first constructs a moving-target tracking data set based on an event camera and, on this data set, proposes a visual target tracking algorithm based on cross-domain attention to track the target accurately. The method exploits and combines the respective advantages of frame images and event data: frame images provide rich texture information, while event data still provide clear object-edge information in challenging scenes. By setting the weights of the two kinds of domain information according to the scene, the advantages of the two sensors are effectively fused, solving the target tracking problem under complex conditions.
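The scene-dependent weighting described above can be sketched as a convex combination of the two domains' feature maps. This is only an illustration of the idea: the fixed scalar weights and function name below are hypothetical, whereas the patent's cross-domain attention module predicts the weights per scene.

```python
import numpy as np

def fuse_domains(frame_feat, event_feat, w_frame=0.7, w_event=0.3):
    """Convex combination of frame-domain and event-domain feature maps.
    In well-lit, slow scenes the frame weight would dominate; in low-light
    or fast-motion scenes the event weight would. The scalars here are
    illustrative stand-ins for learned attention weights."""
    assert np.isclose(w_frame + w_event, 1.0), "weights must sum to 1"
    return w_frame * frame_feat + w_event * event_feat
```

A learned version would replace the two scalars with per-pixel (or per-channel) weight maps produced by the attention network, normalized with a softmax so they still sum to one.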

Description

technical field

[0001] The invention belongs to the technical field of computer vision and in particular relates to a method for visually tracking a moving target based on deep learning, using the frame images and event streams output by an event camera.

Background technique

[0002] Moving-target tracking is an important topic in computer vision: given the size and position of the target in the first frame of a video, the target must be tracked in the remaining frames. Methods based on convolutional neural networks (CNNs) have shown excellent performance in this field, and most rely on traditional frame images (RGB or grayscale) to complete tracking. However, the performance of frame-image-based trackers drops dramatically under harsh conditions (e.g., low light or fast motion). To improve tracker robustness in harsh environments, multimodal methods have gradually been proposed, such as dep...


Application Information

IPC(8): G06T7/246, G06T7/13
Inventors: 傅应锴, 杨鑫, 张吉庆, 尹宝才, 魏小鹏 (Fu Yingkai, Yang Xin, Zhang Jiqing, Yin Baocai, Wei Xiaopeng)
Owner DALIAN UNIV OF TECH