
Food image segmentation method and system based on dynamic transformer

A food image segmentation technology in the fields of computer vision and food computing. It addresses problems such as the unbalanced, long-tailed distribution of food datasets, so as to improve segmentation precision and accuracy, alleviate the lack of food-specific design in existing methods, and improve the richness and completeness of extracted image semantic information.

Pending Publication Date: 2022-06-21
BEIJING TECHNOLOGY AND BUSINESS UNIVERSITY
Cites: 0 · Cited by: 2

AI Technical Summary

Problems solved by technology

On the other hand, the distribution of food datasets is usually unbalanced, exhibiting a long-tail distribution problem.



Examples


Embodiment 1

[0020] As shown in Figure 1, a dynamic-transformer-based food image segmentation method provided by an embodiment of the present invention includes the following steps:

[0021] Step S1: Divide the input food image into a series of image blocks of different sizes according to preset sizes, construct multiple dynamic vision transformer encoder networks of different sizes, and perform feature encoding on the image blocks at the different division scales; use the multi-head self-attention mechanism to weight the image-block features at the different scales; and output multi-layer image feature vectors of different scales.
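
Step S1 amounts to multi-scale patch embedding followed by a per-scale transformer encoder whose multi-head self-attention weights the patch features. Below is a minimal sketch, assuming PyTorch; the class name MultiScalePatchEncoder, the patch sizes, the embedding width, and the encoder depth are illustrative choices, and the dynamic selection among encoders is not detailed in this excerpt, so it is omitted.

```python
import torch
import torch.nn as nn


class MultiScalePatchEncoder(nn.Module):
    """Split an image into patches at several preset sizes and encode each
    scale with its own transformer encoder (multi-head self-attention)."""

    def __init__(self, in_channels=3, embed_dim=256, patch_sizes=(4, 8, 16),
                 depth=2, num_heads=8):
        super().__init__()
        # One patch-embedding convolution per preset patch size.
        self.patch_embeds = nn.ModuleList(
            nn.Conv2d(in_channels, embed_dim, kernel_size=p, stride=p)
            for p in patch_sizes
        )
        # One small stack of self-attention layers per scale.
        self.encoders = nn.ModuleList(
            nn.TransformerEncoder(
                nn.TransformerEncoderLayer(d_model=embed_dim, nhead=num_heads,
                                           batch_first=True),
                num_layers=depth,
            )
            for _ in patch_sizes
        )

    def forward(self, x):
        """Return one feature map of shape (B, C, H/p, W/p) per patch size."""
        features = []
        for embed, encoder in zip(self.patch_embeds, self.encoders):
            tokens = embed(x)                        # (B, C, h, w) patch embedding
            b, c, h, w = tokens.shape
            seq = tokens.flatten(2).transpose(1, 2)  # (B, h*w, C) token sequence
            seq = encoder(seq)                       # multi-head self-attention weighting
            features.append(seq.transpose(1, 2).reshape(b, c, h, w))
        return features


if __name__ == "__main__":
    feats = MultiScalePatchEncoder()(torch.randn(1, 3, 224, 224))
    print([tuple(f.shape) for f in feats])  # one feature map per preset patch size
```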

[0022] Step S2: Extract the image feature vectors of the preset layers and fuse them to obtain a fused image feature vector.
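
A minimal sketch of Step S2, again assuming PyTorch: the preset layers are taken from the per-scale outputs of the previous sketch, resized to a common resolution, concatenated, and projected with a 1x1 convolution. The concrete fusion operator is an assumption; this excerpt does not state how the preset layers are combined.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class PresetLayerFusion(nn.Module):
    """Select the preset layers from the encoder outputs and fuse them into a
    single feature map."""

    def __init__(self, embed_dim=256, num_preset=3, out_dim=256):
        super().__init__()
        # 1x1 projection applied after channel-wise concatenation.
        self.proj = nn.Conv2d(embed_dim * num_preset, out_dim, kernel_size=1)

    def forward(self, features, preset_ids=(0, 1, 2)):
        selected = [features[i] for i in preset_ids]
        # Resize every selected layer to the first (finest) selected resolution.
        target_hw = selected[0].shape[-2:]
        resized = [F.interpolate(f, size=target_hw, mode="bilinear",
                                 align_corners=False) for f in selected]
        return self.proj(torch.cat(resized, dim=1))  # fused image feature vector
```

With the previous sketch, usage would be `fused = PresetLayerFusion()(feats)`.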

[0023] Step S3: Construct a multi-level feature aggregation network, perform top-down feature fusion on the fused image feature vectors, build a multi-layer feature pyramid, and obtain a multi-scale feature fusion vector ...
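
A minimal sketch of Step S3, under the assumption that the multi-level feature aggregation network behaves like a feature-pyramid top-down pathway: lateral 1x1 convolutions, coarse-to-fine upsampling with element-wise addition, and a 3x3 smoothing convolution per level. The exact aggregation rule used by the invention may differ.

```python
import torch.nn as nn
import torch.nn.functional as F


class TopDownAggregation(nn.Module):
    """Multi-level feature aggregation: lateral projections plus top-down
    (coarse-to-fine) fusion, producing a multi-layer feature pyramid."""

    def __init__(self, in_dims=(256, 256, 256), out_dim=256):
        super().__init__()
        self.laterals = nn.ModuleList(nn.Conv2d(d, out_dim, kernel_size=1)
                                      for d in in_dims)
        self.smooth = nn.ModuleList(nn.Conv2d(out_dim, out_dim, kernel_size=3,
                                              padding=1) for _ in in_dims)

    def forward(self, feats):
        # `feats` is ordered fine -> coarse; start from the coarsest level.
        laterals = [lat(f) for lat, f in zip(self.laterals, feats)]
        pyramid = [laterals[-1]]
        for lateral in reversed(laterals[:-1]):
            upsampled = F.interpolate(pyramid[0], size=lateral.shape[-2:],
                                      mode="nearest")
            pyramid.insert(0, lateral + upsampled)   # top-down fusion
        # Smooth every pyramid level before handing it to the decoder.
        return [s(p) for s, p in zip(self.smooth, pyramid)]
```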

Embodiment 2

[0050] As shown in Figure 4, an embodiment of the present invention provides a dynamic-transformer-based food image segmentation system, including the following modules:

[0051] The image feature vector acquisition module 51 is used to divide the input food image into a series of image blocks of different sizes according to preset sizes, construct multiple dynamic vision transformer encoder networks of different sizes, and perform feature encoding on the image blocks at the different division scales; it uses the multi-head self-attention mechanism to weight the image-block features at the different scales and outputs multi-layer image feature vectors of different scales.

[0052] The image feature vector fusion module 52 is used to extract the image feature vectors of the preset layers and fuse them to obtain a fused image feature vector.

[0053] The multi-layer feature pyramid construction module 53 is used to construct a multi-level feature aggregation network ...
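
For illustration, modules 51 to 53, together with a decoder head such as the one sketched after the abstract below, could be chained as follows. The plumbing between modules, including which feature maps feed the pyramid, is an assumption rather than the patent's specified wiring.

```python
import torch.nn as nn


class FoodSegmentationSystem(nn.Module):
    """Chain the system's modules end to end (illustrative composition only)."""

    def __init__(self, encoder, fusion, aggregation, decoder, preset_ids=(0, 1, 2)):
        super().__init__()
        self.encoder = encoder          # module 51: multi-scale image feature vectors
        self.fusion = fusion            # module 52: fuse the preset layers
        self.aggregation = aggregation  # module 53: multi-level feature aggregation
        self.decoder = decoder          # segmentation decoder (Step S4 in the abstract)
        self.preset_ids = preset_ids

    def forward(self, image):
        feats = self.encoder(image)
        fused = self.fusion(feats, self.preset_ids)
        # Feed the fused map together with the coarser encoder scales into the pyramid.
        pyramid = self.aggregation([fused] + feats[1:])
        # Decode the finest pyramid level to the input resolution.
        return self.decoder(pyramid[0], image.shape[-2:])
```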



Abstract

The invention relates to a food image segmentation method and system based on a dynamic transformer. The method comprises the steps: S1, dividing an input food image into a series of image blocks of different sizes and inputting the image blocks into a plurality of dynamic vision transformer encoder networks of different sizes, then outputting multi-layer image feature vectors of different scales; S2, extracting the image feature vectors of the preset layers and fusing them to obtain fused image feature vectors; S3, constructing a multi-level feature aggregation network, performing top-down feature fusion on the fused image feature vectors, constructing a feature pyramid, and obtaining multi-scale feature fusion vectors; and S4, constructing a segmentation decoder, performing convolution and up-sampling operations on the multi-scale features fused by the feature pyramid, and finally generating a segmentation result with accurate food category boundaries. The method can adapt to different image scales and improves the richness and completeness of extracted image semantic information, so that the food segmentation model generalizes well and is robust.
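
Step S4 of the abstract describes a segmentation decoder that convolves and up-samples the fused multi-scale features into a per-pixel result. A minimal sketch under that reading follows, assuming PyTorch; the head structure (3x3 convolution, batch normalization, ReLU, 1x1 classifier) and bilinear upsampling are illustrative choices, not the patent's exact decoder.

```python
import torch.nn as nn
import torch.nn.functional as F


class SegmentationDecoder(nn.Module):
    """Convolve and upsample fused multi-scale features into a per-pixel
    food-category segmentation map."""

    def __init__(self, in_dim, num_classes):
        super().__init__()
        self.head = nn.Sequential(
            nn.Conv2d(in_dim, in_dim, kernel_size=3, padding=1),
            nn.BatchNorm2d(in_dim),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_dim, num_classes, kernel_size=1),  # per-class logits
        )

    def forward(self, fused_feature, image_size):
        logits = self.head(fused_feature)
        # Upsample to the input resolution so category boundaries stay sharp.
        return F.interpolate(logits, size=image_size, mode="bilinear",
                             align_corners=False)
```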

Description

Technical field

[0001] The invention relates to the fields of computer vision and food computing, in particular to a method and system for food image segmentation based on a dynamic transformer.

Background technique

[0002] Computer vision is an emerging technology for acquiring and analyzing images of real scenes, helping intelligent systems perceive the world from images and multidimensional data. Its core techniques have always revolved around image analysis and processing, which can classify, detect, and segment specific objects in images. Image semantic segmentation makes pixel-level predictions over a set of object categories; it is usually a more demanding task than image classification, which predicts a single label for an entire image. Approaches range from the earliest traditional methods, such as thresholding, k-means clustering, and region growing, to deep learning models that have achieved good results, such as FCN, PSPNet, and DeepLab ...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T7/10, G06T5/50, G06N3/04, G06N3/08
CPC: G06T7/10, G06T5/50, G06N3/08, G06T2207/20221, G06T2207/30128, G06T2207/20081, G06T2207/20084, G06N3/045
Inventors: 李海生, 董笑笑, 王薇, 王晓川, 李楠
Owner: BEIJING TECHNOLOGY AND BUSINESS UNIVERSITY