
3-D Object Detection and Classification from Imagery

A technology for object detection and classification, applied in the field of 3-D object detection and classification from imagery. It addresses the problems that existing models provide no information about how or why a detected object was recognized, that no sensor data may be available for some target objects, and that model output is usually limited to a bounding box. The target recognition technique of the present disclosure makes detection and classification easier, more precise, and more accurate.

Active Publication Date: 2022-03-03
COVAR LLC
Cites: 18 | Cited by: 1

AI Technical Summary

Benefits of technology

[0007]It is an object of this disclosure to describe a target recognition model and system which can accurately detect targets when no data from those targets can be collected. Additionally, it is an object of this disclosure to provide for a target recognition model and system which can enable a user to evaluate and analyze the model's output after the model makes one or more predictions.
[0009]In contrast to standard end-to-end deep learning networks, the proposed invention separates target detection and classification into a multi-stage process. The target recognition model disclosed herein can offer a new approach to computer-aided object detection and classification. This model can leverage generic, robust background information (e.g., what wheels look like on a vehicle) and known component relationships (e.g., the size and shape of components on the vehicle) to perform reliable, explainable target detection and classification. This model can solve the robustness problem by training a multi-task, multi-purpose data processing backbone which enables robust object detection and component part segmentation. The outputs of this part segmentation network can then be integrated into a component-based target identifier that can be updated using only physical models of new target classes, or 3-D models (e.g., CAD models). This enables the model to detect new types of objects without requiring any sensor data for these new classes.
[0010]The target recognition technique of the present disclosure is more precise than other target recognition techniques. One of the reasons for this precision is that this technique uses various steps which optimize the target recognition process. For example, many target recognition techniques attempt to directly classify an object. However, in one embodiment, the present disclosure provides for detecting an entity of interest (or broad object class) first, which is much easier and more precise than classification of objects. Similarly, the present disclosure provides for segmentation of parts of a detected object (or target), which is also much easier than classification of objects. As such, the final classification of the object based on the segmented parts is easier and more accurate.
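The staged pipeline described above (broad entity detection, then component part segmentation, then matching against projections derived from 3-D models) can be illustrated with a minimal Python sketch. All function names, data structures, and the toy projection database below are hypothetical stand-ins; the patent does not specify an implementation, and the stage bodies here are placeholders for trained networks.

```python
# Minimal sketch of the multi-stage target recognition pipeline (hypothetical
# names and data structures; not the patent's actual implementation).

from dataclasses import dataclass

@dataclass
class Detection:
    box: tuple   # (x, y, w, h) of the broad entity of interest
    label: str   # coarse class, e.g. "vehicle"

def detect_entities(image):
    """Stage 1: find broad entities of interest (easier than full classification)."""
    # Placeholder: a real system would run a trained detector here.
    return [Detection(box=(10, 10, 80, 40), label="vehicle")]

def segment_components(image, detection):
    """Stage 2: label component parts inside the detected box."""
    # Placeholder segmentation map: component name -> relative positions.
    return {"wheel": [(0.1, 0.8), (0.9, 0.8)], "hull": [(0.5, 0.5)]}

def match_projection(seg_map, model_projections):
    """Stage 3: compare the component layout against projections of 3-D models.
    New target classes need only a CAD model, not collected sensor data."""
    def score(proj):
        shared = set(seg_map) & set(proj)
        return len(shared) / max(len(set(seg_map) | set(proj)), 1)
    best = max(model_projections, key=lambda name: score(model_projections[name]))
    return best, score(model_projections[best])

# Toy projection database, as if rendered from CAD models of known classes.
projections = {
    "truck": {"wheel": [(0.1, 0.8), (0.9, 0.8)], "hull": [(0.5, 0.5)]},
    "tank":  {"track": [(0.5, 0.9)], "turret": [(0.5, 0.3)], "hull": [(0.5, 0.6)]},
}

image = None  # stand-in for sensor imagery
for det in detect_entities(image):
    seg = segment_components(image, det)
    label, confidence = match_projection(seg, projections)
    print(det.label, "->", label, round(confidence, 2))
```

The key property the sketch preserves is that only `projections` must change to support a new target class: adding an entry rendered from a new CAD model requires no retraining and no sensor data.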

Problems solved by technology

Additionally, the output from these models is usually limited to a box drawn around the detected object. As such, these models do not provide any information about how or why the detected object was recognized or classified. In fact, oftentimes, there is no data available for these objects. This lack of data impedes the training process for the model, and thus, reduces the accuracy of the model's predictions. Furthermore, in these applications, due to the serious consequences that can come about as a result of the prediction, the end-user must be able to immediately interpret and/or verify the system's output, e.g., the end-user must be able to understand how and why the system came up with its conclusion.

Method used



Examples


Example embodiment


[0056]FIG. 6 shows an example system 600 implementing the target recognition technique of the present disclosure. In this example embodiment, the system can include a memory 601 storing an image module 610, a detector module 620, a segmentation module 630, a classification module 640, and a user-interface module 650. In this example embodiment, the image module 610 can generate an image and provide the image to the detector module 620. The detector module 620 can detect an entity of interest in the image and pass information about the entity of interest to the segmentation module 630. The segmentation module 630 can detect one or more components of the entity of interest and prepare a segmentation map based on the components detected. The segmentation module 630 can transmit the segmentation map to the classification module 640, which can compare the segmentation map to the projections stored in a database. The classification module 640 can also match a projection to the segmen...
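The module wiring of example system 600 can be sketched as follows. Class and method names are illustrative only (the patent names the modules but not their interfaces), and each module body is a trivial placeholder for the behavior paragraph [0056] describes.

```python
# Hypothetical sketch of the FIG. 6 module pipeline; names beyond the module
# labels in paragraph [0056] are illustrative, not from the patent.

class ImageModule:                       # module 610
    def generate(self):
        return "image-frame"             # stand-in for real sensor imagery

class DetectorModule:                    # module 620
    def detect(self, image):
        return {"entity": "vehicle", "box": (0, 0, 64, 32)}

class SegmentationModule:                # module 630
    def segment(self, image, entity):
        return {"wheel", "hull"}         # segmentation map as a component set

class ClassificationModule:              # module 640
    def __init__(self, projection_db):
        self.projection_db = projection_db
    def classify(self, seg_map):
        # Compare the segmentation map to stored 3-D model projections.
        for name, proj in self.projection_db.items():
            if proj == seg_map:
                return name
        return None

class UserInterfaceModule:               # module 650
    def notify(self, label):
        return f"match: {label}" if label else "no match"

# Wire the modules together in the order paragraph [0056] describes.
db = {"truck": {"wheel", "hull"}}
image = ImageModule().generate()
entity = DetectorModule().detect(image)
seg = SegmentationModule().segment(image, entity)
label = ClassificationModule(db).classify(seg)
print(UserInterfaceModule().notify(label))
```

A real classification module would match projections by geometric similarity rather than exact set equality; exact equality is used here only to keep the data flow between the five modules visible.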



Abstract

A system and method for recognizing objects in an image is described. The system can receive an image from a sensor and detect one or more objects in the image. The system can further detect one or more components of each detected object. Subsequently, the system can create a segmentation map based on the components detected for each detected object and determine whether the segmentation map matches a plurality of 3-D models (or projections thereof). Additionally, the system can display a notification through a user interface indicating whether the segmentation map matches at least one of the plurality of 3-D models.

Description

CROSS-REFERENCE TO RELATED APPLICATIONS[0001]The present application is a continuation of U.S. Ser. No. 17/001,779, which was filed on Aug. 25, 2020 and will issue as U.S. Pat. No. 11,138,410 on Oct. 5, 2021.FIELD OF THE INVENTION[0002]The present invention pertains to object detection and processing.BACKGROUND AND SUMMARY[0003]Object detection is a computer technology related to computer vision and image processing that deals with detecting instances of objects of a certain class (e.g., humans, buildings, or cars) in digital images and videos. Methods for object detection generally fall into either classical machine learning-based approaches or deep learning-based approaches. Classical machine learning approaches require first defining features and then using a technique such as a support vector machine (SVM) to perform the classification.[0004]3D object recognition involves recognizing and determining 3D information, such as the pose, volume, or sha...

Claims


Application Information

Patent Type & Authority: Application (United States)
IPC (8): G06K9/00; G06T7/11; G06K9/62; G06T7/70; G06V10/26; G06V10/764
CPC: G06K9/00214; G06T7/11; G06N3/08; G06T7/70; G06K9/6201; G06V20/647; G06V20/52; G06V10/26; G06V10/82; G06T2207/20084; G06N20/10; G06N20/20; G06V10/764; G06N5/01; G06N7/01; G06N3/045; G06V20/653; G06F18/22
Inventors: TORRIONE, PETER A.; HIBBARD, MARK
Owner: COVAR LLC