
Radar-Image Cross-modal Retrieval Method Based on Deep Hash Algorithm

A deep hash algorithm applied to radar-image cross-modal retrieval. It addresses problems such as blurred or incomplete camera images, and achieves small data storage space and fast retrieval speed.

Active Publication Date: 2020-12-11
TSINGHUA UNIV

AI Technical Summary

Problems solved by technology

[0007] The purpose of the present invention is to propose a radar-image cross-modal retrieval method based on a deep hash algorithm, to solve the problem that images captured by a camera are blurred or incomplete when a mobile robot operates at night or in low light.



Examples


Embodiment

[0067] 3) See Figure 1. Train the deep hash network: input the point cloud files and images preprocessed in step 2) into the deep hash network and, at the same time, construct a similarity matrix S so that data from the two modalities are correlated, thereby obtaining the image deep-learning sub-network parameters θx and the point cloud deep-learning sub-network parameters θy. The specific method is as follows: in the PointNet network, the input is the set of all point clouds of one frame, expressed as an N×3 vector, where N is the number of points (N = 3000) and 3 corresponds to the three Cartesian coordinate components. The point cloud files in the training set are first aligned by multiplying them with a transformation matrix learned by a T-Net (an alignment network that is part of the point cloud deep sub-network), which makes the PointNet network invariant to certain spatial transformations. After extracting the f...
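The patent provides no source code, so the following is only a minimal PyTorch sketch of the point cloud sub-network described above: a T-Net predicts a 3×3 alignment transform that is multiplied with the raw N×3 point cloud before PointNet-style feature extraction, and a cross-modal similarity matrix S marks which point cloud/image pairs belong together. All layer sizes, the hash length, and the same-frame rule for S are illustrative assumptions, not values taken from the patent.

import torch
import torch.nn as nn

class TNet(nn.Module):
    """Alignment network: learns a 3x3 transform for a (B, N, 3) cloud."""
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
        )
        self.fc = nn.Sequential(
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, 9),              # flattened 3x3 matrix
        )

    def forward(self, x):                  # x: (B, N, 3)
        f = self.mlp(x.transpose(1, 2))    # (B, 128, N)
        f = f.max(dim=2).values            # global max pooling
        t = self.fc(f).view(-1, 3, 3)
        # bias towards the identity so training starts near "no transform"
        return t + torch.eye(3, device=x.device)

class PointCloudSubnet(nn.Module):
    """PointNet-style sub-network ending in a K-bit hash layer (K assumed)."""
    def __init__(self, hash_bits=64):
        super().__init__()
        self.tnet = TNet()
        self.mlp = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 1024, 1), nn.ReLU(),
        )
        self.hash_head = nn.Linear(1024, hash_bits)

    def forward(self, x):                  # x: (B, N, 3), e.g. N = 3000
        x = torch.bmm(x, self.tnet(x))     # align input points via T-Net
        f = self.mlp(x.transpose(1, 2)).max(dim=2).values
        return torch.tanh(self.hash_head(f))  # relaxed codes in (-1, 1)

# Cross-modal similarity matrix S: here S[i, j] = 1 if point cloud i and
# image j come from the same frame, else 0 (a common convention in deep
# cross-modal hashing; the patent's exact construction rule is not shown).
labels_pc = torch.tensor([0, 1, 2])        # frame ids of point clouds
labels_im = torch.tensor([0, 2, 1])        # frame ids of images
S = (labels_pc.unsqueeze(1) == labels_im.unsqueeze(0)).float()

The tanh output is a standard continuous relaxation in deep hashing: it keeps training differentiable, and the final binary codes are obtained by taking the sign of the outputs.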



Abstract

The invention belongs to the fields of machine learning and intelligent control and provides a radar-image cross-modal retrieval method based on a deep hash algorithm. The method comprises the following steps: first, acquire point cloud files and images with sensors on a mobile robot to construct a training set and a test set; then input the point clouds and image files in the training set into the constructed deep hash network, perform feature learning, and obtain binary hash codes for each modality, thereby training the deep hash network. During retrieval, the trained deep hash network produces binary hash codes for the point cloud files and images in the test set, and the Hamming distance between a point cloud file and each image is computed in a common Hamming space to find the image most similar to the point cloud under test, i.e., the retrieval result. With the invention, when the camera image is blurred or incomplete due to environmental factors, the image file most similar to a point cloud can be retrieved from that point cloud, so that the surrounding environment can be perceived better and rich image information can be obtained.
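As a hedged illustration of the retrieval step described in the abstract (not code from the patent), the sketch below ranks images by Hamming distance to a query point cloud's binary hash code; the 64-bit code length and the sample data are invented for the example.

import numpy as np

def hamming_rank(query_code, image_codes):
    """query_code: (K,) in {0,1}; image_codes: (M, K) in {0,1}.
    Returns image indices sorted from most to least similar, plus distances."""
    dists = np.count_nonzero(image_codes != query_code, axis=1)
    return np.argsort(dists), dists

rng = np.random.default_rng(0)
image_codes = rng.integers(0, 2, size=(5, 64))    # 5 images, 64-bit codes
query = image_codes[3] ^ (rng.random(64) < 0.05)  # near-duplicate of image 3
order, dists = hamming_rank(query, image_codes)
print(order[0])                                   # most similar image: 3

Because the codes are short binary vectors, they occupy little storage and the distance computation reduces to bitwise operations, which is what gives hashing-based retrieval its small storage space and fast retrieval speed.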

Description

Technical field

[0001] The invention belongs to the fields of machine learning and intelligent control, and relates to a radar-image cross-modal retrieval method based on a deep hash algorithm.

Background art

[0002] With the rapid development of modern technology, mobile robots are also developing rapidly. Along with human-computer interaction comes the development of sensors, which generally include lidar, cameras, GPS, and ultrasonic radar. Although a sensor is only one component of a mobile robot, its role is critical. When a mobile robot uses a single sensor for environment perception, the data it collects inevitably contain errors, and in complex, changing environments these errors become even larger. Therefore, multiple sensors need to be used in combination.

[0003] The application and research of sensor technology on mobile robots are becoming more and more in-depth. Since the external sensor is a product integrati...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06F16/587
CPC: G06F16/587
Inventors: 刘华平, 徐明浩, 张新钰, 孙富春
Owner: TSINGHUA UNIV