
Method for training food image classification model and image classification method

A classification model and food category technology, applied in the field of image recognition, which solves the problems that the prior art does not study the non-rigid structure of food images and does not consider their geometric deformation, thereby improving classification performance and facilitating recognition.

Publication Date: 2020-04-21 (status: pending)
INST OF COMPUTING TECH CHINESE ACAD OF SCI

AI Technical Summary

Problems solved by technology

[0006] The purpose of the present invention is to solve the problem that the above prior art does not study the non-rigid structure of food images and does not consider their geometric deformation. Therefore, a method for training a food image classification model based on the fusion of multi-scale and multi-view features, and a corresponding image classification method, are provided.




Embodiment Construction

[0032] In order to make the purpose, technical solution, design method and advantages of the present invention clearer, the present invention will be further described in detail through specific embodiments in conjunction with the accompanying drawings. It should be understood that the specific embodiments described here are only used to explain the present invention, not to limit the present invention.

[0033] In general, according to one embodiment of the present invention, after a food image is input, a multi-scale fusion architecture is used to extract and fuse three types of features at different scales. In this embodiment, a category network and a raw-material network are constructed and trained for each scale. Through the category network, the multi-scale, category-oriented semantic distribution and more abstract deep visual features can be extracted. In order to obtain mid-level attribute features, the specific raw material info...
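As a concrete illustration of the kind of architecture paragraph [0033] describes, the sketch below builds, for each scale, a category branch and a raw-material (ingredient) branch on a shared backbone, then concatenates the deep visual features, the category-oriented semantic distribution, and the ingredient attributes before classification. The backbone (ResNet-18), the scale sizes, and the class/ingredient counts are illustrative assumptions, not details fixed by this embodiment.

# Hypothetical sketch (PyTorch) of a multi-scale fusion model with per-scale
# category and raw-material (ingredient) branches. Backbone, scale sizes, and
# the 172-class / 353-ingredient counts are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet18


class ScaleBranch(nn.Module):
    """One scale: a category head and an ingredient head on a shared backbone."""

    def __init__(self, num_classes, num_ingredients):
        super().__init__()
        backbone = resnet18(weights=None)
        self.features = nn.Sequential(*list(backbone.children())[:-1])
        dim = backbone.fc.in_features  # 512 for ResNet-18
        self.category_head = nn.Linear(dim, num_classes)        # category semantics
        self.ingredient_head = nn.Linear(dim, num_ingredients)  # mid-level attributes

    def forward(self, x):
        v = self.features(x).flatten(1)                 # deep visual features
        sem = F.softmax(self.category_head(v), dim=1)   # semantic distribution
        ing = torch.sigmoid(self.ingredient_head(v))    # multi-label ingredients
        return v, sem, ing


class MultiScaleFusionNet(nn.Module):
    """Concatenates visual, semantic, and ingredient features over all scales."""

    def __init__(self, num_classes=172, num_ingredients=353, scales=(224, 192, 160)):
        super().__init__()
        self.scales = scales
        self.branches = nn.ModuleList(
            [ScaleBranch(num_classes, num_ingredients) for _ in scales])
        fused_dim = len(scales) * (512 + num_classes + num_ingredients)
        self.classifier = nn.Linear(fused_dim, num_classes)

    def forward(self, x):
        per_scale = []
        for size, branch in zip(self.scales, self.branches):
            xs = F.interpolate(x, size=(size, size), mode="bilinear",
                               align_corners=False)
            v, sem, ing = branch(xs)
            per_scale.append(torch.cat([v, sem, ing], dim=1))  # fuse the three feature types
        fused = torch.cat(per_scale, dim=1)                    # fuse across scales
        return self.classifier(fused)


if __name__ == "__main__":
    model = MultiScaleFusionNet()
    print(model(torch.randn(2, 3, 224, 224)).shape)  # torch.Size([2, 172])

In this sketch the softmax output of each category head stands in for the category-oriented semantic distribution and the pooled backbone activations stand in for the deep visual features; the actual embodiment may choose different backbones, scales, and fusion operators.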


Abstract

The invention provides a method for training a food image classification model and an image classification method. The method comprises the steps of: respectively constructing a multi-scale food raw-material neural network and a multi-scale food category neural network, and carrying out multi-scale division of a target image; respectively carrying out multi-scale fusion on each type of feature of the target image, and then fusing the three types of features; and inputting the fused features into a classifier for classification. The method innovatively proposes the complementary fusion of high-level food semantic distributions and deep visual features, and further fuses raw-material attribute information with them, which addresses the non-rigid structure and geometric deformation of food images and better facilitates their recognition. Moreover, the multi-scale fusion mode overcomes the lack of a fixed spatial arrangement in food images and improves classification performance to the greatest extent.
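To make the training side of the abstract concrete, the following is a minimal sketch of one optimization step that supervises the fused category classifier with cross-entropy and each scale branch with an auxiliary category loss plus a multi-label ingredient loss. It reuses the hypothetical MultiScaleFusionNet sketched above; the loss weights and the joint supervision scheme are assumptions for illustration, not the claimed training procedure.

# Hypothetical training-step sketch reusing the MultiScaleFusionNet above.
import torch
import torch.nn.functional as F


def train_step(model, optimizer, images, category_labels, ingredient_labels,
               ingredient_loss_weight=0.5):
    """One step: cross-entropy on the fused classifier plus auxiliary
    category / multi-label ingredient losses on every scale branch."""
    model.train()
    optimizer.zero_grad()

    logits = model(images)                               # fused prediction
    loss = F.cross_entropy(logits, category_labels)

    for size, branch in zip(model.scales, model.branches):
        xs = F.interpolate(images, size=(size, size), mode="bilinear",
                           align_corners=False)
        _, sem, ing = branch(xs)
        loss = loss + F.nll_loss(torch.log(sem + 1e-8), category_labels)
        loss = loss + ingredient_loss_weight * F.binary_cross_entropy(
            ing, ingredient_labels)

    loss.backward()
    optimizer.step()
    return loss.item()

A call would look like train_step(model, torch.optim.SGD(model.parameters(), lr=0.01), images, y_category, y_ingredients), where y_ingredients is a multi-hot float tensor of shape (batch, num_ingredients).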

Description

Technical field

[0001] The invention relates to the field of image recognition, in particular to food image classification.

Background technique

[0002] Food is the material basis of people's lives, and good eating habits can help prevent various chronic diseases (such as obesity and diabetes). Food image classification has a wide range of practical applications, such as smart bracelets that analyze diet and nutrition, and self-checkout in smart restaurants.

[0003] However, food image classification also presents several difficulties: (1) in real life, food images contain background information that has nothing to do with the food; (2) food images of the same category may differ markedly, while images of different categories may look similar; (3) a food image has no unique spatial shape, and its appearance varies with the cooking method, so it lacks a rigid structure.

[0004] In order to solve the above problems, some work is based on Fa...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/62; G06N3/08
CPC: G06N3/08; G06F18/254
Inventors: 蒋树强, 刘林虎, 闵巍庆
Owner: INST OF COMPUTING TECH CHINESE ACAD OF SCI