
Systematic target identification and tracking method based on multi-modal data

A target-recognition method based on multi-modal technology, applied in image data processing, instruments, computing and related fields. It addresses problems such as the absence of laser points on distant targets, the sharp increase of point-cloud sparsity with distance, recognition-system redundancy, increased computational load, and abundant unknown interference, with the effect of overcoming the large computational load and enhancing adaptability.

Pending Publication Date: 2021-12-07
CHINA NORTH VEHICLE RES INST
Cites: 0 · Cited by: 3

AI Technical Summary

Problems solved by technology

[0006] 1. Target recognition based on color images is easily affected by changes in illumination; in darkness, ordinary color images cannot provide the pixel information needed for recognition.
[0007] 2. Point-cloud sparsity increases sharply with distance, so a distant target may have no laser point representing it, making it impossible to recognize.
[0008] 3. Performing target recognition and tracking on each modality separately and then fusing the results at the output can compensate for the inherent shortcomings of the different data types, but it ignores the correlation between the modalities during recognition, makes the recognition system redundant, and increases the computational load.
[0009] Owing to these shortcomings, existing unmanned ground platforms cannot work stably for long periods in unstructured environments.




Embodiment Construction

[0038] The present invention will be described in detail below with reference to the accompanying drawings and examples.

[0039] This embodiment provides a systematic target recognition and tracking method based on multi-modal data; see figure 1. The specific steps are as follows:

[0040] Step 1: The unmanned ground platform is equipped with a lidar, a camera, and an inertial measurement unit (IMU); the IMU may or may not include GPS. The lidar collects point-cloud data by laser ranging in the environment, the camera collects image data of the platform's surroundings, and the IMU collects the platform's pose data. The point-cloud, image, and pose data together constitute the multi-modal data;
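The three sensor streams of Step 1 can be carried in a single per-sample container. The sketch below is a hypothetical layout; the field names and array shapes are assumptions for illustration, not specified by the patent:

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class MultiModalFrame:
    """One sample of the three sensor streams (hypothetical layout)."""
    timestamp: float    # seconds on a common clock
    points: np.ndarray  # (N, 3) lidar points in the sensor frame, metres
    image: np.ndarray   # (H, W, 3) RGB camera image
    pose: np.ndarray    # (4, 4) platform pose from the IMU, as an SE(3) matrix


# Example: an empty frame with typical shapes.
frame = MultiModalFrame(
    timestamp=0.0,
    points=np.zeros((1024, 3)),
    image=np.zeros((480, 640, 3), dtype=np.uint8),
    pose=np.eye(4),
)
```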

[0041] Step 2: Time-synchronization processing is performed on the collected multi-modal data to ensure the temporal consistency of all the modalities;
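The patent does not specify the synchronization rule; a common choice is nearest-timestamp matching with a maximum allowed skew, sketched below (function name and the 50 ms tolerance are assumptions):

```python
import bisect


def synchronize(reference_ts, other_ts, max_skew=0.05):
    """Match each reference timestamp to the closest timestamp in other_ts.

    Both lists are assumed sorted in seconds on a common clock. Pairs whose
    skew exceeds max_skew are dropped. Returns (reference_index, other_index)
    pairs.
    """
    pairs = []
    for i, t in enumerate(reference_ts):
        j = bisect.bisect_left(other_ts, t)
        # The nearest neighbour is either the insertion point or the one before.
        candidates = [k for k in (j - 1, j) if 0 <= k < len(other_ts)]
        if not candidates:
            continue
        best = min(candidates, key=lambda k: abs(other_ts[k] - t))
        if abs(other_ts[best] - t) <= max_skew:
            pairs.append((i, best))
    return pairs
```

For example, camera frames at 0.0 s and 0.1 s match lidar sweeps at 0.001 s and 0.098 s, while a sweep at 0.5 s is left unpaired.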

[0042] Step 3: Use the offline...



Abstract

The invention discloses a systematic target identification and tracking method based on multi-modal data. The method comprises the following steps: fusing point-cloud data and image data online to generate RGB-D three-dimensional data images; feeding the RGB-D images into a deep neural network trained on a database, so that the trained network outputs the types and distances of the various targets in each RGB-D image and thereby completes their identification; and, for each identified target, outputting its distance, size, and two-dimensional pixel coordinates, extracting the target's image features with the deep neural network to match the identification output of each frame, and predicting, screening, and tracking the target's relative position by fitting and updating a target motion model, so that detected specific targets are tracked. The invention enables target identification and tracking by an unmanned ground platform in an unstructured, complex environment where the vehicle speed is high, unknown environmental interference is abundant, and targets appear at random positions.
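The online fusion described in the abstract amounts to projecting each lidar point through the lidar-to-camera extrinsics and the camera intrinsics into the image plane, and recording its depth at the hit pixel. A minimal sketch, assuming a pinhole camera model and an offline-calibrated intrinsic matrix `K` and extrinsic `T_cam_lidar` (the patent does not publish its actual fusion code):

```python
import numpy as np


def fuse_to_rgbd(points, image, K, T_cam_lidar):
    """Build an RGB-D image from lidar points and a camera image (sketch).

    points: (N, 3) lidar points; image: (H, W, 3) RGB; K: (3, 3) intrinsics;
    T_cam_lidar: (4, 4) lidar-to-camera transform. Returns (H, W, 4) float32.
    """
    h, w = image.shape[:2]
    depth = np.zeros((h, w), dtype=np.float32)
    # Transform the points into the camera frame (homogeneous coordinates).
    pts_h = np.hstack([points, np.ones((len(points), 1))])
    cam = (T_cam_lidar @ pts_h.T).T[:, :3]
    cam = cam[cam[:, 2] > 0]  # keep only points in front of the camera
    # Pinhole projection to pixel coordinates.
    uv = (K @ cam.T).T
    u = (uv[:, 0] / uv[:, 2]).astype(int)
    v = (uv[:, 1] / uv[:, 2]).astype(int)
    ok = (0 <= u) & (u < w) & (0 <= v) & (v < h)
    depth[v[ok], u[ok]] = cam[ok, 2]
    # Stack colour and depth into a 4-channel RGB-D image.
    return np.dstack([image.astype(np.float32), depth])
```

The depth channel is sparse where no lidar point projects; a deployed system would typically also resolve multiple points landing on one pixel by keeping the nearest.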

Description

Technical field

[0001] The invention belongs to the technical field of target identification and tracking for unmanned ground platforms, and in particular relates to a systematic target identification and tracking method based on multi-modal data.

Background technique

[0002] Unmanned ground platforms are characterized by high maneuverability and high adaptability. To increase a platform's driving speed, its detection accuracy and detection frequency must be improved; to improve its adaptability, multi-modal data are needed to respond to changes in the surrounding environment and to extreme situations. When the platform maneuvers autonomously, the multi-modal data it collects must cover a wide variety of obstacles and targets; to distinguish different road terrains, it can select suitable detection data for each detection task. Different sensors have corresponding th...


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06T7/246, G06T7/277, G06T7/285, G06T7/80
CPC: G06T7/246, G06T7/277, G06T7/285, G06T7/85, G06T2207/10012, G06T2207/10024, G06T2207/10028, G06T2207/10044
Inventor: 卢彩霞, 梁震烁, 赵熙俊, 于华超, 程文, 余雪玮, 刘雪妍
Owner: CHINA NORTH VEHICLE RES INST