Food volume estimation method based on single depth map deep learning view synthesis

A technology of view synthesis and deep learning, applied to computing, image analysis, image enhancement, etc. It addresses problems such as complicated operating procedures, failure of 3D reconstruction, and factors that reduce the accuracy of feature point matching and 3D reconstruction, so as to help address underlying health problems and improve the accuracy of volume estimation.

Pending Publication Date: 2022-05-31
北京精培医学研究院

AI Technical Summary

Problems solved by technology

[0003] For volume estimation, previous studies mainly focus on model-based or stereo-based methods, which rely on manual intervention or require users to capture multiple frames from different viewpoints, making the operation relatively complicated. In addition, various computer vision techniques have been applied to the problem of quantifying food portions. Food volume measurement technology can be divided into two categories: model-based and stereo-based. Although these technologies have shown good performance, they still have the following problems:
[0005] (2) Stereo-based methods require participants to take multiple food images from different perspectives, which makes the procedure complicated;
[0006] (3) Other methods require feature point extraction and matching; for food objects with smooth surfaces or inconspicuous textures, feature points cannot be effectively extracted, resulting in failure of the 3D reconstruction;
[0007] (4) When images are taken from different viewing angles, the light reflected by the object changes, which affects the accuracy of feature point matching and 3D reconstruction.

Method used


Image


Examples


Embodiment 1

[0052] Embodiment 1: a method for estimating food volume based on single-depth-map deep learning view synthesis, as shown in Figures 1 to 5, comprising the following steps:

[0053] 1) Place each object item at the origin, and capture depth images of the object item from different perspectives through four motion modes: azimuth rotation, elevation rotation, height adjustment, and center movement;

[0054] 2) Segment and classify the captured images; using multiple sets of extrinsic camera parameters, randomly render the initial depth images and the corresponding depth images of the object item captured from the opposite perspective, and use the rendered depth images as a training dataset;

[0055] 3) Using a view synthesis method based on a deep neural network, predict results from the input image for unseen viewpoints and unseen object items;

[0056] 4) Register the camera coordinates of the initial depth image and of the relative (opposite-view) depth image into the same world coordinate system to obtain a complete 3D point cloud of the target object item, as sketched below.
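
Step 4 amounts to back-projecting each depth image into a 3D point cloud using the camera intrinsics and then mapping both clouds into a common world frame with the camera extrinsics. The patent does not give an implementation; the following is a minimal Python/NumPy sketch under a standard pinhole camera model, where the intrinsics (fx, fy, cx, cy) and the 4x4 camera-to-world matrices T_front and T_back are hypothetical placeholders.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth map (in meters) into camera-frame 3D points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]                        # drop invalid (zero-depth) pixels

def to_world(points_cam, T_world_cam):
    """Apply a 4x4 camera-to-world extrinsic to camera-frame points."""
    pts_h = np.hstack([points_cam, np.ones((len(points_cam), 1))])
    return (T_world_cam @ pts_h.T).T[:, :3]

# Merge the captured view and the synthesized opposite view into one cloud.
# d_front, d_back are depth maps; T_front, T_back are camera-to-world extrinsics.
# cloud = np.vstack([to_world(depth_to_points(d_front, fx, fy, cx, cy), T_front),
#                    to_world(depth_to_points(d_back,  fx, fy, cx, cy), T_back)])
```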

Embodiment 2

[0099] Embodiment 2: an application of the method for estimating food volume based on single-depth-map deep learning view synthesis, in which the method is applied to estimating the nutrient content of dietary intake.

[0100] In this embodiment, as shown in Figure 6, after the volume of each meal is obtained by the volume estimation method, the content of each ingredient in the meal can be obtained from the database, thereby helping users understand their eating behavior.
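
To make the lookup in this embodiment concrete: once the volume of a food item is known, its mass follows from a per-food density, and nutrient amounts scale from per-100 g database values. The sketch below is only an illustration; the food names, densities, and nutrient figures are hypothetical stand-ins for whatever nutrition database the embodiment uses.

```python
# Hypothetical per-food density (g/cm^3) and nutrient content per 100 g.
# Real values would come from the nutrition database referenced in the embodiment.
FOOD_DB = {
    "rice":  {"density": 0.80, "per_100g": {"kcal": 130, "protein_g": 2.7, "carb_g": 28.0}},
    "apple": {"density": 0.85, "per_100g": {"kcal": 52,  "protein_g": 0.3, "carb_g": 14.0}},
}

def nutrients_from_volume(food: str, volume_cm3: float) -> dict:
    """Convert an estimated volume into nutrient amounts via mass = volume * density."""
    entry = FOOD_DB[food]
    mass_g = volume_cm3 * entry["density"]
    return {name: amount * mass_g / 100.0 for name, amount in entry["per_100g"].items()}

print(nutrients_from_volume("rice", 250.0))  # nutrients for roughly 200 g of cooked rice
```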

[0101] Working principle: through the integration of depth sensing technology and deep learning view synthesis, a single depth image can be captured from any convenient angle; by adopting the network structure and a point cloud completion algorithm, the 3D structure of objects of various shapes can be learned implicitly, the occluded part of the food can be recovered, and the accuracy of volume estimation is improved; applying the food volume estimation method to the estimation of the nutrient content of dietary intake ...



Abstract

The invention discloses a food volume estimation method based on single depth map deep learning view synthesis, and relates to the technical field of diet evaluation. The method comprises the following steps: 1) placing each object item at the origin and capturing depth images of the object item from different viewing angles; 2) rendering the captured depth images into a training data set; 3) using a view synthesis method based on a deep neural network to predict results from the input image for unseen viewing angles and unseen object items; 4) obtaining a complete three-dimensional point cloud of the target object item; 5) preprocessing the depth image of the object item; 6) further optimizing the preprocessed point cloud by adopting an ICP (Iterative Closest Point) algorithm; and 7) meshing the object item by adopting an Alpha shape method to form a three-dimensional mesh, so as to obtain the volume of the object item. Based on the integration of depth sensing technology and deep learning view synthesis, a single depth image can be obtained at any convenient angle, so that accurate food volume estimation is realized.
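
Steps 6 and 7 of the abstract, i.e. ICP refinement of the merged point cloud followed by Alpha-shape meshing to read off a volume, can be prototyped with an off-the-shelf geometry library. The patent does not name a library; the sketch below assumes Open3D, and the file names, ICP distance threshold, and alpha value are illustrative choices, not values from the invention.

```python
import numpy as np
import open3d as o3d  # assumed library; the patent does not prescribe one

# Hypothetical inputs: the captured-view cloud and the synthesized opposite-view cloud.
front = o3d.io.read_point_cloud("front_view.ply")
back = o3d.io.read_point_cloud("back_view_synthesized.ply")

# Step 6: refine the coarse alignment with ICP (Iterative Closest Point).
icp = o3d.pipelines.registration.registration_icp(
    back, front,
    max_correspondence_distance=0.01,  # 1 cm threshold, illustrative only
    init=np.eye(4),
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
back.transform(icp.transformation)
merged = front + back

# Step 7: build an Alpha-shape mesh and take its enclosed volume.
mesh = o3d.geometry.TriangleMesh.create_from_point_cloud_alpha_shape(merged, alpha=0.02)
volume_m3 = mesh.get_volume() if mesh.is_watertight() else None  # volume is defined only for watertight meshes
print("estimated volume (cm^3):", None if volume_m3 is None else volume_m3 * 1e6)
```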

Description

Technical field

[0001] The present invention relates to the technical field of diet evaluation, and more specifically to a food volume estimation method based on deep learning view synthesis of a single depth map.

Background technique

[0002] An objective dietary assessment system can help users understand their eating behavior and enable targeted interventions to address underlying health problems. To accurately quantify dietary intake, the weight or volume of food needs to be measured.

[0003] For volume estimation, previous studies mainly focus on model-based or stereo-based methods, which rely on manual intervention or require users to capture multiple frames from different viewpoints, making the operation relatively complicated. In addition, various computer vision techniques have been applied to the problem of quantifying food portions. Food volume measurement technology can be divided into two categories: model-based and stereo-based ...

Claims


Application Information

Patent Type & Authority: Applications (China)
IPC (8): G06T7/62; G06T17/20; G06T19/20
CPC: G06T7/62; G06T17/20; G06T19/20; G06T2207/10028; G06T2207/20081; G06T2207/20084; G06T2207/20221; G06T2219/2016
Inventor: 赖建强; 王烨; 朱成博
Owner: 北京精培医学研究院