
Vegetable and background segmentation method based on inertial measurement unit and visual information

A technology based on an inertial measurement unit and visual information, applied in the field of image processing, which solves problems such as complicated use and heavy manual labeling, and achieves high precision and a wide range of use scenarios

Active Publication Date: 2018-12-11
湖州帷幄知识产权运营有限公司

AI Technical Summary

Problems solved by technology

The existing background segmentation method requires a large amount of manual labeling and prior training of the model, and is complex to use.


Image

  • Vegetable and background segmentation method based on inertial measurement unit and visual information

Examples

Experimental program
Comparison scheme
Effect test

Embodiment 1

[0040] 1) Collect continuously shot pictures: the user rotates 90 degrees around the dish in the longitudinal direction while photographing it, and the capture takes 10 seconds.

[0041] 2) Obtain the position and attitude of the camera corresponding to each picture frame from visual-inertial odometry, where the visual-inertial odometry uses a combination of filtering and optimization methods.

[0042] 3) Pass the pictures from step 1) through a dense point cloud algorithm, combined with the positions and attitudes, to obtain a 3D dense point cloud image of the scene.

[0043] 4) Apply an image segmentation algorithm to the 3D dense point cloud image to obtain a 3D dense point cloud image of the dish; the segmentation algorithm uses height as the distinguishing feature.

[0044] 5) Match the picture with the 3D dense point cloud image to select the picture to be recognized and obtain the position and attitude of...
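Step 4's height-based split can be sketched as a simple threshold over a gravity-aligned point cloud, where the z axis comes from the IMU's gravity estimate. This is a minimal illustration, not the patent's actual implementation; the `table_height` and `margin` values, the NumPy representation, and the toy cloud are all assumptions.

```python
import numpy as np

def segment_by_height(points, table_height=0.0, margin=0.005):
    """Split a dense point cloud into dish and background by height.

    points: (N, 3) array of XYZ coordinates in a gravity-aligned frame.
    Points more than `margin` above the estimated table plane are
    treated as the dish; everything else is background.
    """
    z = points[:, 2]
    dish_mask = z > table_height + margin
    return points[dish_mask], points[~dish_mask]

# Toy cloud: a flat table at z = 0 with a dish rising 2-5 cm above it.
rng = np.random.default_rng(0)
table = np.column_stack([rng.uniform(-0.3, 0.3, 500),
                         rng.uniform(-0.3, 0.3, 500),
                         np.zeros(500)])
dish = np.column_stack([rng.uniform(-0.1, 0.1, 200),
                        rng.uniform(-0.1, 0.1, 200),
                        rng.uniform(0.02, 0.05, 200)])
cloud = np.vstack([table, dish])
fg, bg = segment_by_height(cloud)
print(len(fg), len(bg))  # 200 foreground (dish) points, 500 background
```

A real dish scene would first need the table plane estimated (e.g. by a robust plane fit), but the distinguishing feature is the same: height above the support surface.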

Embodiment 2

[0047] 1) Collect continuously shot pictures: the user rotates 90 degrees around the dish in the longitudinal direction while photographing it, and the capture takes 10 seconds.

[0048] 2) Obtain the position and attitude of the camera corresponding to each picture frame from visual-inertial odometry, where the visual-inertial odometry uses a combination of filtering and optimization methods.

[0049] 3) Pass the pictures from step 1) through a dense point cloud algorithm, combined with the positions and attitudes, to obtain a 3D dense point cloud image of the scene.

[0050] 4) Apply an image segmentation algorithm to the 3D dense point cloud image to obtain a 3D dense point cloud image of the dish; the segmentation algorithm uses height as the distinguishing feature.

[0051] 5) Match the picture with the 3D dense point cloud to select the picture to be recognized and obtain the position and attitude of th...
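Step 5's matching of the segmented 3D dish points back to a 2D picture can be illustrated by projecting them through a pinhole camera model to form a binary foreground mask. The intrinsic matrix `K`, the image size, and the point-splat rasterization below are illustrative assumptions, not details taken from the patent.

```python
import numpy as np

def project_to_mask(points_cam, K, image_shape):
    """Project dish points (already transformed into the camera frame
    using the recovered pose) into pixel coordinates, and rasterize
    them into a binary foreground mask."""
    h, w = image_shape
    z = points_cam[:, 2]
    valid = z > 1e-6                       # keep points in front of the camera
    uvw = (K @ points_cam[valid].T).T      # homogeneous pixel coordinates
    u = np.round(uvw[:, 0] / uvw[:, 2]).astype(int)
    v = np.round(uvw[:, 1] / uvw[:, 2]).astype(int)
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    mask = np.zeros((h, w), dtype=bool)
    mask[v[inside], u[inside]] = True
    return mask

# Hypothetical intrinsics: 500 px focal length, principal point (320, 240).
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
pts = np.array([[0.0, 0.0, 1.0],    # projects to the principal point
                [0.1, 0.0, 1.0]])   # 50 px to the right of it
mask = project_to_mask(pts, K, (480, 640))
print(mask[240, 320], mask[240, 370])  # True True
```

A dense dish cloud would fill a contiguous region rather than isolated pixels; a morphological closing over the splatted mask is a common follow-up.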



Abstract

The invention relates to the technical field of image processing, in particular to a vegetable and background segmentation method based on an inertial measurement unit and visual information. The method comprises the following steps: 1) collecting continuously photographed pictures; 2) obtaining the position and attitude of the camera corresponding to each picture frame based on the visual information and the inertial measurement unit; 3) obtaining a 3D dense point cloud image of the pictures from step 1) through a dense point cloud algorithm combined with the positions and attitudes. The method fuses the inertial measurement unit and the visual information to obtain a 3D point cloud of the vegetables and the surrounding objects, and uses the 3D point cloud information of the vegetables and the background to segment the vegetables in the image. Compared with conventional general-purpose algorithms, the method requires no prior training, achieves higher precision, and suits a wider range of use scenarios.
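The fusion of IMU and visual information in step 2) can be conveyed, in a highly simplified one-dimensional form, by a complementary filter that blends integrated gyroscope rates (fast but drifting) with visual heading fixes (noisy but drift-free). The patent itself combines filtering and optimization methods over full 6-DoF poses; this single-angle sketch, with made-up bias and noise figures, only illustrates the idea.

```python
import numpy as np

def complementary_filter(gyro_rates, visual_angles, dt, alpha=0.98):
    """Fuse gyroscope rates with visual heading estimates.

    Each step integrates the gyro for a fast prediction, then nudges it
    toward the drift-free visual measurement with weight (1 - alpha).
    """
    fused = np.empty_like(visual_angles)
    fused[0] = visual_angles[0]
    for k in range(1, len(fused)):
        predicted = fused[k - 1] + gyro_rates[k] * dt   # IMU integration
        fused[k] = alpha * predicted + (1 - alpha) * visual_angles[k]
    return fused

# 90-degree sweep over 10 s at 10 Hz, matching the capture in the embodiments.
dt, n = 0.1, 101
true_angle = np.linspace(0.0, np.pi / 2, n)
gyro = np.gradient(true_angle, dt) + 0.01           # biased gyroscope
visual = true_angle + 0.02 * np.sin(np.arange(n))   # noisy visual fixes
fused = complementary_filter(gyro, visual, dt)
print(abs(fused[-1] - true_angle[-1]) < 0.1)  # True: fused track stays close
```

Pure gyro integration would drift by the bias times the elapsed time, while the raw visual track jitters; the blend suppresses both error sources.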

Description

Technical field

[0001] The invention relates to the technical field of image processing, in particular to a dish and background segmentation method based on an inertial measurement unit and visual information.

Background technique

[0002] In image recognition of dishes, background patterns interfere with recognition. If the dishes can be separated from the background in advance, the robustness of dish recognition can be greatly improved. The existing technology uses deep learning to segment dishes and backgrounds, which requires a large amount of manual labeling and prior training of the model, has low computational efficiency, and is complicated to use.

[0003] The invention patent with application number CN201610694814.9 discloses a training method for an image foreground and background segmentation network model, including: obtaining the feature vector of the sample image to be trained, wherein the sample image contains foreground annotation informa...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T7/70; G06T7/194
CPC: G06T7/194; G06T7/70; G06T2207/10028
Inventor: 王梓里
Owner: 湖州帷幄知识产权运营有限公司