Image fine-grained recognition method based on multi-scale feature fusion

A multi-scale feature fusion and recognition technology, applied in character and pattern recognition, instruments, biological neural network models, etc., which can solve problems such as poor real-time performance.

Pending Publication Date: 2019-08-06
SOUTHEAST UNIV

AI Technical Summary

Problems solved by technology

[0004] For fine-grained recognition of image target subclasses, some existing methods fail to combine low-level local features well and do not meet real-time requirements. Based on the bilinear convolutional neural network and combined with the idea of a multi-scale feature pyramid, the present invention provides a fine-grained image recognition method based on multi-scale feature fusion.
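The bilinear features referred to here come, in the standard bilinear-CNN formulation, from outer products of feature maps pooled over spatial locations. The following is a minimal sketch only; the function name, the use of PyTorch, and the signed-square-root plus L2-normalisation step are standard choices assumed for illustration, not details taken from this patent.

```python
import torch
import torch.nn.functional as F

def bilinear_pool(feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
    """Bilinear pooling of two feature maps (B, C_a, H, W) and (B, C_b, H, W).

    Returns a (B, C_a * C_b) descriptor: outer products averaged over spatial
    positions, followed by signed square root and L2 normalisation, as in the
    standard bilinear-CNN formulation.
    """
    b, ca, h, w = feat_a.shape
    cb = feat_b.shape[1]
    a = feat_a.reshape(b, ca, h * w)                  # (B, C_a, HW)
    bmat = feat_b.reshape(b, cb, h * w)               # (B, C_b, HW)
    x = torch.bmm(a, bmat.transpose(1, 2)) / (h * w)  # (B, C_a, C_b)
    x = x.reshape(b, ca * cb)
    x = torch.sign(x) * torch.sqrt(x.abs() + 1e-10)   # signed square root
    return F.normalize(x, dim=1)                      # L2 normalisation
```

For example, passing the same convolutional feature map as both `feat_a` and `feat_b` gives the symmetric (self-)bilinear descriptor commonly used when a single backbone is shared.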


Image

  • Image fine-grained recognition method based on multi-scale feature fusion (figure)

Examples


Embodiment Construction

[0035] The present invention is described in further detail below in conjunction with the accompanying drawings and specific embodiments:

[0036] The present invention provides an image fine-grained recognition method based on multi-scale feature fusion. A feature pyramid is used to fuse multi-layer bilinear features, fine-grained recognition results are predicted independently at each pyramid level, and the final fine-grained recognition result is obtained by voting over the per-level predictions. The method is tested on the cigarette fine-grained recognition data set Cigarette67-2018 proposed by the laboratory and on the public bird fine-grained recognition data set CUB200-2011; the accuracies on these two test sets are improved to 85.4% and 95.95%, respectively. In addition, the inference speed of the present invention on a single-core CPU meets real-time requirements.
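As a rough illustration of the pipeline described above (bilinear features at several pyramid levels, an independent classifier per level, and a vote over the per-level predictions), the following sketch assumes a ResNet-18 style backbone with three pyramid levels; the module names, channel widths, and the softmax-sum voting rule are assumptions for illustration, not details taken from the patent.

```python
import torch
import torch.nn as nn
import torchvision

def bilinear_descriptor(f: torch.Tensor) -> torch.Tensor:
    # Self-bilinear pooling of a feature map (B, C, H, W) -> (B, C*C),
    # with signed square root and L2 normalisation.
    b, c, h, w = f.shape
    x = torch.einsum('bchw,bdhw->bcd', f, f).reshape(b, c * c) / (h * w)
    x = torch.sign(x) * torch.sqrt(x.abs() + 1e-10)
    return nn.functional.normalize(x, dim=1)

class MultiScaleBilinearNet(nn.Module):
    """Sketch: bilinear features at three pyramid levels, one classifier per level."""

    def __init__(self, num_classes: int):
        super().__init__()
        backbone = torchvision.models.resnet18(weights=None)
        self.stem = nn.Sequential(backbone.conv1, backbone.bn1, backbone.relu,
                                  backbone.maxpool, backbone.layer1)
        self.layer2, self.layer3, self.layer4 = (backbone.layer2,
                                                 backbone.layer3,
                                                 backbone.layer4)
        # one linear head per pyramid level (ResNet-18 channel widths assumed)
        self.heads = nn.ModuleList(
            [nn.Linear(c * c, num_classes) for c in (128, 256, 512)])

    def forward(self, x: torch.Tensor) -> list[torch.Tensor]:
        f2 = self.layer2(self.stem(x))
        f3 = self.layer3(f2)
        f4 = self.layer4(f3)
        # an independent fine-grained prediction at each pyramid level
        return [head(bilinear_descriptor(f))
                for f, head in zip((f2, f3, f4), self.heads)]

def vote(per_level_logits: list[torch.Tensor]) -> torch.Tensor:
    # Combine per-level predictions; here by summing softmax scores and taking argmax.
    probs = torch.stack([l.softmax(dim=1) for l in per_level_logits]).sum(dim=0)
    return probs.argmax(dim=1)
```

In this sketch the per-level logits returned by `forward` would each be trained with their own classification loss, and `vote` mimics the final voting step; the backbone, pyramid levels and voting rule actually used by the patent may differ.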

[0037] Taking the public data set CUB200-2011 ...



Abstract

The invention discloses an image fine-grained recognition method based on multi-scale feature fusion. Building on a bilinear convolutional neural network combined with the idea of a multi-scale feature pyramid, the method fuses multi-layer bilinear features through a feature pyramid, independently predicts fine-grained recognition results at each pyramid level, and obtains the final fine-grained recognition result by voting over the per-level predictions. On the cigarette fine-grained recognition data set Cigarette67-2018 proposed by the laboratory and the public bird fine-grained recognition data set CUB200-2011, the method achieves accuracies of 85.4% and 95.95%, respectively, and its inference speed on a single-core CPU meets real-time requirements.

Description

technical field

[0001] The invention belongs to the fields of computer vision, artificial intelligence, and multimedia signal processing, and in particular relates to an image fine-grained recognition method based on multi-scale feature fusion.

Background technique

[0002] Image fine-grained recognition studies the recognition and classification of image subclasses. With the rapid development of deep learning and artificial intelligence technology, image fine-grained recognition, as a basic computer vision problem, has also made great progress. Fine-grained recognition is defined relative to coarse-grained recognition: coarse-grained recognition refers to the recognition and classification of large object categories in the traditional sense, while fine-grained recognition refers to the identification of sub-categories, such as identifying the 200 bird species in the CUB200-2011 data set proposed by the California Institute of Technology ...


Application Information

IPC(8): G06K9/62; G06N3/04
CPC: G06N3/045; G06F18/2451; G06F18/254; G06F18/253
Inventor: 杨绿溪, 邓亭强, 廖如天, 张旭帆, 赵清玄
Owner: SOUTHEAST UNIV