
Image semantic segmentation method with learnable feature fusion coefficients

A semantic segmentation and feature fusion coefficient technology, applied in the fields of deep learning and computer vision, which addresses problems such as limited performance gains from fusion, increased computation and computation time, and blind selection of fusion layers.

Status: Inactive
Publication Date: 2018-03-06
Applicant: TIANJIN UNIV

AI Technical Summary

Problems solved by technology

However, it is not the case that the more features are fused, the better. There is a great deal of redundant information among semantic features, so simply fusing more features increases the amount of computation and the computation time while not necessarily improving performance. At the same time, there is no clear method for deciding which layers' features should be fused; the layers are basically selected by intuition and experience, which is rather blind.

Embodiment Construction

[0051] The present invention will be further described below through specific embodiments and the accompanying drawings. The embodiments are intended to help those skilled in the art better understand the present invention and do not limit it in any way.

[0052] The specific steps of the image semantic segmentation method with learnable feature fusion coefficients of the present invention are as follows:

[0053] First, train a deep convolutional network classification model from images to class labels on the image classification dataset:

[0054] In practice, we generally do not train this classification model ourselves; instead, we directly use the pre-trained parameters of the VGG16 classification network to initialize our semantic segmentation network.
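A minimal sketch of this initialization step, assuming a PyTorch/torchvision setup (the patent does not name a framework), would load the ImageNet-pretrained VGG16 and reuse its convolutional stack as the segmentation backbone:

```python
# Sketch only: reuse pre-trained VGG16 convolutional layers (conv1_1 ... pool5)
# to initialize the segmentation network instead of training the classification
# model from scratch on an image classification dataset.
import torchvision

vgg16 = torchvision.models.vgg16(weights="IMAGENET1K_V1")
backbone = vgg16.features  # pre-trained parameters initialize the segmentation backbone
```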

[0055] Secondly, the fully connected layers of the deep convolutional neural network classification model are converted into convolutional layers to obtain a fully convolutional deep neural network model, which can perform category prediction at the pixel level.
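The fully-connected-to-convolutional conversion could look like the following sketch, again assuming the standard torchvision VGG16 layout; `num_classes` is an illustrative placeholder, not a value taken from the patent:

```python
import torch.nn as nn
import torchvision

vgg16 = torchvision.models.vgg16(weights="IMAGENET1K_V1")

# fc6: Linear(512*7*7 -> 4096) becomes a 7x7 convolution over the pool5 feature map.
fc6 = vgg16.classifier[0]
conv6 = nn.Conv2d(512, 4096, kernel_size=7)
conv6.weight.data.copy_(fc6.weight.data.view(4096, 512, 7, 7))
conv6.bias.data.copy_(fc6.bias.data)

# fc7: Linear(4096 -> 4096) becomes an equivalent 1x1 convolution.
fc7 = vgg16.classifier[3]
conv7 = nn.Conv2d(4096, 4096, kernel_size=1)
conv7.weight.data.copy_(fc7.weight.data.view(4096, 4096, 1, 1))
conv7.bias.data.copy_(fc7.bias.data)

# A 1x1 convolution replaces the final classifier so that every spatial
# location receives a score per semantic category (num_classes is illustrative).
num_classes = 21
score = nn.Conv2d(4096, num_classes, kernel_size=1)
```

With every layer convolutional, the network accepts inputs of arbitrary size and produces a score map rather than a single label vector, which is what enables pixel-level prediction.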

Abstract

The invention relates to an image semantic segmentation method with learnable feature fusion coefficients. The method mainly comprises the following steps: to begin with, training a deep convolutional network classification model from image to category label on an image classification data set; converting the fully connected layers in the classification model into convolutional layers to obtain a fully convolutional deep neural network model for category prediction at the pixel level; expanding convolutional layer branches and setting a coefficient for each branch, the feature fusion layers being fused in proportion to the coefficients and the coefficients being kept in a learnable state; then, carrying out fine-tuning training on an image semantic segmentation data set while learning the coefficients to obtain a semantic segmentation model; carrying out fine-tuning training and fusion coefficient learning to obtain 1-20 groups of fusion coefficients; and finally, selecting from each group the branch whose coefficient is largest, carrying out the final combination, and carrying out fine-tuning training and coefficient learning again to obtain the final semantic segmentation model. The method enables the feature fusion effect to reach its best state.
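The branch expansion and coefficient fusion described in the abstract could be sketched as a small PyTorch module; this is an illustrative interpretation rather than the patent's reference implementation, and the softmax normalization of the coefficients is an assumption:

```python
import torch
import torch.nn as nn

class CoefficientFusion(nn.Module):
    """Fuse per-branch score maps with one learnable coefficient per branch."""

    def __init__(self, num_branches):
        super().__init__()
        # One scalar coefficient per expanded branch, kept in a learnable
        # state so it is updated jointly with the network during fine-tuning.
        self.coeffs = nn.Parameter(torch.ones(num_branches))

    def forward(self, branch_maps):
        # Normalize the coefficients (softmax is an assumption) so that the
        # branches are fused in proportion to their coefficients.
        weights = torch.softmax(self.coeffs, dim=0)
        return sum(w * m for w, m in zip(weights, branch_maps))

# Hypothetical usage: fuse score maps from three expanded branches.
fusion = CoefficientFusion(num_branches=3)
```

After each fine-tuning run, the learned coefficients can be read off to identify the branch with the largest coefficient in each group for the final combination.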

Description

Technical Field

[0001] The invention relates to the technical fields of deep learning and computer vision, in particular to an image semantic segmentation method with learnable feature fusion coefficients.

Background Technique

[0002] In the past few years, deep convolutional neural networks have brought great performance improvements to the field of computer vision, including image classification, object detection, pose estimation, and semantic segmentation. At present, the main way to perform semantic segmentation is to use deep neural networks for dense pixel prediction; at the same time, some works combine conditional random field methods to post-process the segmentation results and make them more refined. Previously, most semantic segmentation methods based on deep neural networks first detected object candidate boxes in the image and then combined these boxes with their categories as the segmentation result.

Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/00, G06K9/62, G06K9/66, G06N3/04, G06N3/08
CPC: G06N3/08, G06V20/10, G06V20/41, G06V30/194, G06N3/045, G06F18/24
Inventor: 韩亚洪, 于健壮
Owner: TIANJIN UNIV