
Feature point extraction method, device and storage medium based on convolutional neural network

A convolutional neural network based feature point extraction technology, applied in the field of image processing, which addresses problems such as redundant feature dimensions, degraded descriptor expressiveness, and high computational cost, with the effects of reducing data bandwidth and the number of layers, optimizing precision, and reducing the number of division operations.

Active Publication Date: 2021-10-26
UNIV OF ELECTRONICS SCI & TECH OF CHINA

AI Technical Summary

Problems solved by technology

However, this processing strategy has certain limitations: (1) Although it is a significant improvement over other processing methods, the amount of computation is still large for embedded platforms with low computing power and no graphics processing unit (GPU). Yet embedded platforms such as robots and drones are precisely the main application scenarios for feature point extraction as the front end of simultaneous localization and mapping (SLAM).
(2) Feature point detection mainly uses low-dimensional geometric information, yet it shares a convolutional neural network encoder of the same depth as local descriptor generation (which requires global semantic information). On the one hand, this makes the feature dimension used for detection redundant; on the other hand, backpropagating the feature point detection loss also degrades the encoder's ability to generate expressive descriptors.
(3) When computing the detection result, convolutional neural network based feature point extraction methods first generate a heat map of the same size as the input image, where each pixel value represents the confidence that the pixel is a feature point, and then perform non-maximum suppression on the heat map. This step consumes a large number of operations and has become the time-consuming bottleneck of the entire system.
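To illustrate why this step is costly, the following is a minimal sketch of non-maximum suppression over a full-resolution confidence heat map. The function name, suppression radius, and threshold are illustrative assumptions, not values from the patent; the point is that every above-threshold pixel must be compared against its whole neighbourhood, which scales with image size.

```python
import numpy as np

def heatmap_nms(heatmap: np.ndarray, radius: int = 4, threshold: float = 0.015) -> np.ndarray:
    """Naive non-maximum suppression over a confidence heat map.

    Returns an (N, 2) array of (row, col) keypoint coordinates. Every pixel
    above `threshold` is checked against its full (2*radius+1)^2
    neighbourhood, which is why this step becomes a bottleneck on large images.
    """
    h, w = heatmap.shape
    keypoints = []
    ys, xs = np.where(heatmap > threshold)
    for y, x in zip(ys, xs):
        y0, y1 = max(0, y - radius), min(h, y + radius + 1)
        x0, x1 = max(0, x - radius), min(w, x + radius + 1)
        # keep the pixel only if it is the maximum of its neighbourhood
        if heatmap[y, x] >= heatmap[y0:y1, x0:x1].max():
            keypoints.append((y, x))
    return np.array(keypoints)
```

In practice this neighbourhood scan runs over hundreds of thousands of pixels per frame, which is the bottleneck the patent's reduced-resolution detection aims to avoid.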

Method used




Embodiment Construction

[0036] In order to make the objects, technical solutions, and advantages of the present invention clearer, embodiments of the present invention are further described in detail below in conjunction with the accompanying drawings.

[0037] Traditional convolutional neural network feature point extraction methods with a shared encoder achieve good accuracy, but their computational complexity remains too high for embedded platforms and platforms without a GPU. An embodiment of the present invention proposes a feature point extraction method based on a convolutional neural network that greatly reduces computational complexity while maintaining accuracy similar to traditional extraction schemes, making it feasible to deploy this type of extraction scheme on embedded platforms.

[0038] Referring to Figure 1, the feature point extraction method based on the convolutional neural network provided by the embodiment of the present ...



Abstract

The invention discloses a feature point extraction method, device and storage medium based on a convolutional neural network, belonging to the technical field of image processing. The invention first uses a shared convolutional neural network encoder 1 to extract low-dimensional features from an input grayscale image of any size; it then decouples feature point detection from descriptor generation, sending the low-dimensional features separately to a feature point decoder and to a convolutional neural network encoder 2. The descriptor decoder uses the feature point coordinates output by the feature point decoder to interpolate the high-dimensional feature tensor output by encoder 2, extracting the descriptor corresponding to each feature point. While maintaining accuracy similar to traditional extraction schemes, the invention greatly reduces computational complexity, making it feasible to deploy the feature point extraction scheme on embedded platforms.
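The interpolation step described above, sampling a coarse high-dimensional feature tensor at full-resolution keypoint coordinates, can be sketched as follows. This is a minimal illustration, not the patent's implementation: the function name, the downsampling factor of 8, and the L2 normalization are assumptions commonly used in learned-descriptor pipelines.

```python
import numpy as np

def sample_descriptors(feat: np.ndarray, keypoints: np.ndarray, cell: int = 8) -> np.ndarray:
    """Bilinearly interpolate a coarse (C, Hc, Wc) feature tensor at
    full-resolution (row, col) keypoint locations, then L2-normalize
    each sampled descriptor. `cell` is the assumed downsampling factor
    of the descriptor encoder (illustrative, not from the patent)."""
    C, Hc, Wc = feat.shape
    descs = []
    for y, x in keypoints.astype(float):
        # map image coordinates into the coarse feature grid
        fy, fx = y / cell, x / cell
        y0, x0 = int(np.floor(fy)), int(np.floor(fx))
        y1, x1 = min(y0 + 1, Hc - 1), min(x0 + 1, Wc - 1)
        wy, wx = fy - y0, fx - x0
        # bilinear blend of the four surrounding feature vectors
        d = ((1 - wy) * (1 - wx) * feat[:, y0, x0]
             + (1 - wy) * wx * feat[:, y0, x1]
             + wy * (1 - wx) * feat[:, y1, x0]
             + wy * wx * feat[:, y1, x1])
        descs.append(d / (np.linalg.norm(d) + 1e-12))
    return np.stack(descs)
```

Because descriptors are only sampled at detected keypoints rather than decoded densely for every pixel, this design avoids upsampling the full descriptor map, which is one source of the claimed computational savings.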

Description

Technical field

[0001] The invention belongs to the technical field of image processing, and in particular relates to a feature point extraction method, device and storage medium based on a convolutional neural network.

Background technique

[0002] Feature point extraction refers to detecting points with distinctive geometric features from an input image, such as points with large grayscale changes, corner points, and ellipse center points, and expressing the local features around each feature point as a feature point descriptor (mostly a 128-dimensional or 256-dimensional floating-point vector). Local features require the dual properties of invariance and distinguishability: invariance means that the local images around a feature point still yield similar descriptors after rotation, perspective transformation, photometric change, and scaling; distinguishability means that the descriptors of different local images should be as different a...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06K9/46, G06N3/04
CPC: G06V10/44, G06N3/045
Inventors: 周军, 李静远, 刘野, 黄坤
Owner: UNIV OF ELECTRONICS SCI & TECH OF CHINA