Food volume estimation method and device

A food volume estimation technology in the field of deep learning, which can solve problems such as heavy computing-resource demands, high user-input requirements, and unsuitability for mobile applications, achieving the effect of avoiding complex 3D-reconstruction processes.

Status: Inactive
Publication Date: 2018-05-15
ZHONGAN INFORMATION TECH SERVICES CO LTD


Problems solved by technology

This method places high demands on user input, is very inconvenient to use, and requires a lot of computing ...



Examples


Embodiment 1

[0061] Figure 1 is a schematic flowchart of the food volume estimation method provided by an embodiment of the present invention. As shown in Figure 1, the method includes the following steps:

[0062] 101. Collect image or video data containing multiple categories of food, and obtain real volume data of the food in the collected image or video data.

[0063] Specifically, images or video data containing various types of food are collected under a variety of backgrounds, scenes, and shooting angles. The backgrounds include, but are not limited to, simple backgrounds (such as desktop backgrounds or solid white backgrounds) and complex backgrounds; the scenes include ordinary indoor and outdoor scenes; and the shooting angles include at least a front view and an oblique view with a certain offset. The food image preferably includes a relatively st...
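As a concrete illustration of step 101, the sketch below shows one possible way to organize the collected samples, pairing each image with its measured true volume and capture conditions. This is a minimal sketch only: the record layout, field names, and values are assumptions for illustration, not taken from the patent.

```python
from dataclasses import dataclass

# Hypothetical record layout for one collected training sample; the field
# names and values are illustrative assumptions, not from the patent.
@dataclass
class FoodSample:
    image_path: str        # path to the collected food image or video frame
    category: str          # food category label
    background: str        # e.g. "desktop", "solid_white", "complex"
    scene: str             # e.g. "indoor", "outdoor"
    view_angle: str        # e.g. "front", "oblique"
    true_volume_ml: float  # measured ground-truth volume of the food

# Two captures of the same dish under different conditions share one
# ground-truth volume measurement.
samples = [
    FoodSample("imgs/rice_001.jpg", "rice", "solid_white", "indoor", "front", 210.0),
    FoodSample("imgs/rice_002.jpg", "rice", "complex", "outdoor", "oblique", 210.0),
]
```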

Embodiment 2

[0072] Figure 2 is a schematic flowchart of the food volume estimation method provided by an embodiment of the present invention. Figure 3 is a schematic flowchart of the first stage of the method, and Figure 4 is a schematic flowchart of its second stage. As shown in Figures 2-4, the method can be divided into two stages: the first stage, in which the food area detection model M1 and the volume estimation model M2 are trained; and the second stage, in which the volume of the food to be tested is estimated (a sketch of this inference stage follows the steps below).

[0073] Specifically, the first stage includes the following steps:

[0074] 201. Collect image or video data containing various types of food under various backgrounds, scenes, and shooting angles; an...
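The second stage, in which M1 and M2 are applied to the food to be tested, can be sketched as follows. This is a minimal illustration only: `m1_detect` and `m2_estimate` are hypothetical stand-ins for the trained models M1 and M2, and their interfaces are assumptions, since the excerpt does not specify them.

```python
from typing import Callable, Tuple
from PIL import Image

Box = Tuple[int, int, int, int]  # (left, upper, right, lower) pixel box

def estimate_food_volume(
    image: Image.Image,
    m1_detect: Callable[[Image.Image], Box],
    m2_estimate: Callable[[Image.Image], float],
) -> float:
    """Second-stage inference: detect the food area, then regress its volume."""
    # M1 locates the food region in the input image.
    box = m1_detect(image)
    # Crop so that the volume estimator only sees the detected food area.
    food_region = image.crop(box)
    # M2 maps the cropped food region to a volume estimate.
    return m2_estimate(food_region)
```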

Embodiment 3

[0087] Figure 5 is a structural schematic diagram of the food volume estimation device provided by an embodiment of the present invention. As shown in Figure 5, the device includes:

[0088] A collection module 31, configured to collect image or video data containing multiple categories of food;

[0089] An acquisition module 32, configured to acquire the real volume data of the food in the collected image or video data;

[0090] A model training module 33, configured to perform training with a preset deep learning neural network model, based on the multiple items of image or video data and the real volume data, to obtain a volume estimation model. Preferably, the model training module 33 trains with a preset ResNet, VGG, or DenseNet deep learning neural network model on the multiple items of image or video data and the real volume data to obtain the volume estimation model. Preferab...
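A minimal sketch of what such a training module might look like, assuming a PyTorch/torchvision setup: the patent names ResNet, VGG, and DenseNet as candidate backbones but does not specify the framework, regression head, or loss, so the choices below (single-output head, mean squared error) are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from a standard ResNet backbone (the patent also allows VGG or
# DenseNet) and replace the classifier with a single-output regression
# head that predicts the food volume.
model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 1)

criterion = nn.MSELoss()  # assumed regression loss, not from the patent
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images: torch.Tensor, true_volumes: torch.Tensor) -> float:
    """One optimization step on a batch of (N, 3, H, W) images and
    their (N, 1) measured true volumes."""
    optimizer.zero_grad()
    predicted = model(images)  # (N, 1) predicted volumes
    loss = criterion(predicted, true_volumes)
    loss.backward()
    optimizer.step()
    return loss.item()
```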



Abstract

The invention discloses a food volume estimation method and device, belonging to the technical field of deep learning. The method comprises the steps of: collecting image or video data containing multiple types of food, and obtaining true volume data of the food in the collected image or video data; training a preset deep learning neural network model on the image or video data and the true volume data to obtain a volume estimation model; and, from the image or video data of the food to be tested, using the volume estimation model to obtain a volume estimation result for the food to be tested. The volume estimation method is simple and efficient: a user can quickly obtain a predicted volume of the food simply by inputting a food image or a short food video, and the method can be widely applied to network information services, such as intelligent diet management, that require frequent and rapid food volume estimation.

Description

Technical field

[0001] The invention relates to the technical field of deep learning, and in particular to a food volume estimation method and device.

Background technique

[0002] Modern people pay more and more attention to a healthy diet, and in particular to the calorie intake of food, which is closely related to the volume of food consumed. How to use captured food images to automatically and quickly estimate the volume of food is key to this kind of intelligent diet management application.

[0003] At present, there are relatively few methods for image-based food volume estimation. Most existing methods require the user to input multi-view images, then reconstruct a 3D model of the food in the images, and finally calculate the volume of the 3D model. This approach places high demands on user input, is very inconvenient to use, and requires a lot of computing resources during 3D reconstruction, making it especially unsuitable for mobile applic...


Application Information

IPC(8): G06T7/62
CPC: G06T2207/10004; G06T2207/10016; G06T2207/20081; G06T2207/20084; G06T7/62
Inventors: 韩天奇, 李宏宇
Owner: ZHONGAN INFORMATION TECH SERVICES CO LTD