View and point cloud fused stereoscopic vision content classification method and system

A stereo vision and classification technology, applied in the field of view and point cloud fused stereoscopic vision content classification systems, which solves problems such as information loss, loss of stereo information, and lack of feature discrimination, and achieves the effects of improved reliability, efficient representation and classification, and improved accuracy.

Status: Inactive | Publication Date: 2019-05-14
TSINGHUA UNIV +1

AI Technical Summary

Problems solved by technology

However, in the first fusion method the features of the two modalities are extracted separately, so the advantages of jointly extracting fused features cannot be fully exploited, and the features extracted from each modality lack discrimination.
In the second fusion method, projecting the point cloud data discards a large amount of three-dimensional structure, resulting in information loss.



Examples


Embodiment 1

[0024] Embodiment 1 of the present application will be described with reference to Figure 1 and Figure 2.

[0025] As shown in Figure 1, this embodiment provides a view and point cloud fused stereoscopic vision content classification method, including:

[0026] Step 1, obtain the point cloud data of the object to be classified and the corresponding multiple detection images;

[0027] Specifically, the object to be classified is scanned by a lidar sensor to obtain a set of three-dimensional coordinate points, which is recorded as the point cloud data of the object to be classified; the point cloud typically contains 1024 or 2048 coordinate points. Then, image acquisition devices (such as cameras) set at different angles capture multiple detection images of the object to be classified, typically 8 or 12 views.
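To make the data shapes in this step concrete, here is a minimal Python sketch of the two inputs. The variable names and the 224×224 image resolution are illustrative assumptions, not from the patent; only the point counts (1024 or 2048) and view counts (8 or 12) come from the text above.

```python
import numpy as np

# Hypothetical placeholders for the inputs described in step 1.
NUM_POINTS = 1024          # typically 1024 or 2048 coordinate points
NUM_VIEWS = 12             # typically 8 or 12 views
IMG_H, IMG_W = 224, 224    # an assumed camera resolution

# Point cloud: one (x, y, z) coordinate per lidar-scanned point.
point_cloud = np.random.rand(NUM_POINTS, 3).astype(np.float32)

# Detection images: NUM_VIEWS RGB views of the same object from different angles.
views = np.random.rand(NUM_VIEWS, IMG_H, IMG_W, 3).astype(np.float32)

print(point_cloud.shape)  # (1024, 3)
print(views.shape)        # (12, 224, 224, 3)
```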

[0028] Step 2, according to the neural network model, extract the overall feature descriptor set corresponding to the point cloud data and the high-dimensional feature vector set corresponding to the detection images;
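The text shown here does not specify the neural network model's architecture. As a hedged illustration of step 2, the PyTorch sketch below assumes a PointNet-style point branch (a shared per-point MLP followed by max pooling, yielding the overall descriptor) and a shared CNN view branch (one high-dimensional vector per detection image); all class names and layer sizes are hypothetical.

```python
import torch
import torch.nn as nn

class PointFeatureNet(nn.Module):
    """Assumed point-cloud branch: per-point MLP + max pooling
    produces one global descriptor per object."""
    def __init__(self, out_dim=1024):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 256), nn.ReLU(),
            nn.Linear(256, out_dim),
        )

    def forward(self, pts):                 # pts: (B, N, 3)
        per_point = self.mlp(pts)           # (B, N, out_dim)
        return per_point.max(dim=1).values  # (B, out_dim) overall descriptor

class ViewFeatureNet(nn.Module):
    """Assumed view branch: a shared CNN applied to each view,
    producing one high-dimensional feature vector per detection image."""
    def __init__(self, out_dim=1024):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, out_dim),
        )

    def forward(self, views):               # views: (B, V, 3, H, W)
        b, v = views.shape[:2]
        flat = views.flatten(0, 1)          # (B*V, 3, H, W)
        feats = self.cnn(flat)              # (B*V, out_dim)
        return feats.view(b, v, -1)         # (B, V, out_dim) vector set
```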

Embodiment 2

[0067] As shown in Figure 3, this embodiment provides a view and point cloud fused stereoscopic vision content classification system 30, including: a data acquisition module, a feature extraction module, a calculation module, and a generation module. The data acquisition module is used to obtain the point cloud data of the object to be classified and the corresponding multiple detection images;

[0068] Specifically, the object to be classified is scanned by a lidar sensor to obtain a set of three-dimensional coordinate points, which is recorded as the point cloud data of the object to be classified; the point cloud typically contains 1024 or 2048 coordinate points. Then, image acquisition devices (such as cameras) set at different angles capture multiple detection images of the object to be classified, typically 8 or 12 views.

[0069] In this embodiment, the feature extraction module is used to extract, according to the neural network model, the overall feature descriptor set corresponding to the point cloud data and the high-dimensional feature vector set corresponding to the detection images.
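To show how the four modules listed above might be wired together, here is a minimal Python sketch. The class name, method names, and the division of labor between the injected callables are illustrative assumptions based only on the module list; they are not the patent's implementation.

```python
class StereoContentClassifier:
    """Hypothetical composition of the four modules named in [0067]."""

    def __init__(self, data_acq, feat_extract, calc, gen):
        self.data_acq = data_acq          # acquires point cloud + detection views
        self.feat_extract = feat_extract  # descriptor set + view feature vectors
        self.calc = calc                  # relation scores + view enhancement features
        self.gen = gen                    # fusion network -> unified representation

    def classify(self, obj):
        pts, views = self.data_acq(obj)
        global_desc, view_feats = self.feat_extract(pts, views)
        scores, enhanced = self.calc(global_desc, view_feats)
        return self.gen(global_desc, enhanced, scores)
```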



Abstract

The invention discloses a view and point cloud fused stereoscopic vision content classification method and system. The method comprises: step 1, obtaining the point cloud data of an object to be classified and a plurality of corresponding detection images; step 2, extracting an overall feature descriptor set corresponding to the point cloud data and a high-dimensional feature vector set corresponding to the detection images according to the neural network model; step 3, according to a regularization function, calculating a relation score between the overall feature descriptor set and any high-dimensional feature vector in the high-dimensional feature vector set, together with the view enhancement feature corresponding to that high-dimensional feature vector; and step 4, constructing a fusion network model according to the overall feature descriptor set and the view enhancement features, and generating a unified feature representation of the object to be classified in combination with the relation scores. With this technical scheme, the point cloud and the multi-view data are fused directly and effectively at the feature extraction layer, achieving efficient representation and classification of three-dimensional objects.
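The abstract does not give the regularization function or the fusion network in closed form. As a hedged illustration of steps 3 and 4, the PyTorch sketch below stands in cosine similarity squashed by a sigmoid for the relation score, and score-weighted view features pooled and concatenated with the point-cloud descriptor for the unified representation; both choices are assumptions, not the patent's definitions.

```python
import torch
import torch.nn.functional as F

def relation_scores(global_desc, view_feats):
    """Step 3 sketch: score each view feature against the point-cloud
    overall descriptor. Cosine similarity + sigmoid is an assumed
    stand-in for the patent's regularization function.
    global_desc: (B, D); view_feats: (B, V, D) -> scores: (B, V)"""
    sim = F.cosine_similarity(view_feats, global_desc.unsqueeze(1), dim=-1)
    return torch.sigmoid(sim)

def fuse(global_desc, view_feats, scores):
    """Step 4 sketch: view enhancement features weighted by their relation
    scores, max-pooled over views, then concatenated with the overall
    descriptor into a unified representation (an assumed fusion scheme)."""
    enhanced = view_feats * scores.unsqueeze(-1)     # (B, V, D)
    pooled = enhanced.max(dim=1).values              # (B, D)
    return torch.cat([global_desc, pooled], dim=-1)  # (B, 2D) unified feature
```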

Description

technical field

[0001] The present application relates to the technical field of stereo vision classification, in particular to a view and point cloud fused stereoscopic vision content classification method and a view and point cloud fused stereoscopic vision content classification system.

Background technique

[0002] With the rapid development of the high-tech Internet industry, stereo vision is an important development direction of the future artificial intelligence industry. Stereo vision content has a variety of modal representations, among which multi-view and point cloud representations are commonly used: a multi-view representation describes an object by capturing multiple views from different angles, while a point cloud representation describes an object by a collection of three-dimensional coordinate points obtained through lidar scanning. Many processing methods exist for multi-view data and point cloud data, and neural networks (Neu...


Application Information

IPC(8): G06K9/62; G06N3/04
Inventor: 高跃, 有昊轩, 马楠
Owner: TSINGHUA UNIV