
Implementation method of ultra-small parametric quantity segmentation model

An implementation method for a segmentation model, applied in the field of computer vision, that addresses the problem of unstable output between channels.

Active Publication Date: 2020-06-09
NANKAI UNIV

AI Technical Summary

Problems solved by technology

However, weight decay can cause instability in the output between channels, leading to suboptimal optimization results. Existing methods introduce attention mechanisms with extra blocks to recalibrate the unstable outputs, but adding such blocks contradicts the goal of designing extremely lightweight models.




Embodiment Construction

[0024] 1. Lightweight network design for image segmentation

[0025] The lightweight network for image segmentation tasks proposed by the present invention is divided into two parts: a backbone network and a multi-scale fusion module.
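The multi-scale fusion module is not detailed in this excerpt, but a common way to fuse features from different backbone stages is to resize every stage's output to the largest spatial resolution, concatenate along channels, and mix with a 1x1 convolution. The sketch below illustrates that generic pattern in numpy; the function name `fuse_stages`, the nearest-neighbor resizing, and the 1x1 mixing weights are all assumptions, not the patent's exact design.

```python
import numpy as np

def fuse_stages(stage_feats, w_mix):
    """Fuse per-stage feature maps (each shaped (C_i, H_i, W_i), largest
    first) by nearest-neighbor upsampling to the first stage's resolution,
    channel concatenation, and a 1x1 convolution with weights w_mix of
    shape (C_out, sum_i C_i). A generic sketch, not the patented module."""
    _, h, w = stage_feats[0].shape
    resized = []
    for f in stage_feats:
        factor = h // f.shape[1]  # assumes resolutions divide evenly
        resized.append(f.repeat(factor, axis=1).repeat(factor, axis=2))
    fused = np.concatenate(resized, axis=0)             # (sum C_i, H, W)
    return np.tensordot(w_mix, fused, axes=([1], [0]))  # (C_out, H, W)
```

Because the 1x1 mix operates per pixel, this fusion stays cheap even at the full output resolution, which is consistent with the patent's stated goal of low-cost multi-scale extraction.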

[0026] 1.1 Lightweight backbone network

[0027] The backbone network is formed by stacking basic modules. Each basic module consists of octave convolution (OctConv) and depthwise convolution, which can process feature maps of different sizes simultaneously. OctConv (structure shown in Figure 1) extracts features at different frequencies, capturing both fine details and overall structure while reducing computational complexity. Specifically, the input feature X is split along the channel dimension into two parts with different resolutions, [X^H, X^L]. The input X = [X^H, X^L] is then processed by OctConv to generate output features at the two resolutions, Y = [Y^H, Y^L], as follows:

[0028] Y^H = Conv(X^H) + Upsample[Conv(X^L)], Y^L = Conv(X^L) + Conv[Downsample(X^H)]
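The two-branch exchange above can be sketched in a few lines of numpy. For brevity the sketch uses 1x1 convolutions (per-pixel channel mixing) in place of full k×k convolutions, nearest-neighbor upsampling, and 2×2 average pooling; the function and weight names (`octconv`, `w_hh`, `w_lh`, ...) are illustrative, not from the patent.

```python
import numpy as np

def upsample2x(x):
    # Nearest-neighbor 2x upsampling over the spatial axes of (C, H, W).
    return x.repeat(2, axis=1).repeat(2, axis=2)

def downsample2x(x):
    # 2x2 average pooling over the spatial axes of (C, H, W).
    c, h, w = x.shape
    return x.reshape(c, h // 2, 2, w // 2, 2).mean(axis=(2, 4))

def conv1x1(x, w):
    # 1x1 convolution = per-pixel channel mixing: (C_out, C_in) * (C_in, H, W).
    return np.tensordot(w, x, axes=([1], [0]))

def octconv(x_h, x_l, w_hh, w_lh, w_ll, w_hl):
    """One octave-convolution step: the high-resolution branch x_h and the
    half-resolution branch x_l each receive information from the other,
    mirroring Y^H = Conv(X^H) + Up[Conv(X^L)] and
    Y^L = Conv(X^L) + Conv[Down(X^H)]."""
    y_h = conv1x1(x_h, w_hh) + upsample2x(conv1x1(x_l, w_lh))
    y_l = conv1x1(x_l, w_ll) + conv1x1(downsample2x(x_h), w_hl)
    return y_h, y_l
```

Because the low-frequency branch runs at half resolution, its spatial cost is a quarter of the high-frequency branch's, which is where OctConv's computational savings come from.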



Abstract

The invention discloses an implementation method for a segmentation model with an ultra-small parameter count, belonging to the technical field of computer vision. The method constructs an ultra-lightweight neural network backbone using convolutions that can process feature maps of multiple sizes simultaneously, reducing the amount of computation while extracting multi-scale feature information. A feature fusion module is provided to fuse features from different stages of the backbone structure, fully extracting feature information at different scales at low computational cost, so that high-quality, high-resolution image segmentation results are output. To further compress the number of network parameters, the invention provides a neural network training strategy assisted by dynamic weight decay, in which sparsification constraints of different strengths are applied to different parameters according to the features generated by the current input image during training. Parameters whose values reach zero in the trained model are eliminated, compressing the parameter count of the lightweight model while keeping performance unchanged, thereby obtaining a segmentation model with an extremely small number of parameters.
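The dynamic weight-decay strategy described above can be illustrated with a minimal sketch: scale each output channel's decay strength by that channel's current feature response, then prune channels whose weights have been driven to zero after training. This is one plausible reading of the abstract, with hypothetical names (`dynamic_decay_step`, `prune_zero_channels`) and a simplified per-channel scaling rule, not the patent's exact formulation.

```python
import numpy as np

def dynamic_decay_step(w, grad, feats, lr=0.1, base_decay=1e-2):
    """One SGD step with feature-dependent weight decay (illustrative).
    w:     (C_out, C_in) weights of a 1x1 conv layer
    grad:  gradient of the task loss w.r.t. w, same shape
    feats: (C_out, H, W) feature maps this layer produced on the batch
    Channels with stronger responses receive a stronger sparsification
    pull, so the decay adapts to the current input's features."""
    scale = np.abs(feats).mean(axis=(1, 2))    # (C_out,) per-channel response
    decay = base_decay * scale[:, None]        # broadcast over input channels
    return w - lr * (grad + decay * w)

def prune_zero_channels(w, tol=1e-8):
    """Drop output channels whose weights are numerically all zero,
    shrinking the parameter count without changing the computed function."""
    keep = np.abs(w).max(axis=1) > tol
    return w[keep], keep
```

The pruning step is what turns sparsity into an actual parameter-count reduction: rows of zeros contribute nothing to the output, so removing them leaves the model's predictions unchanged.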

Description

technical field

[0001] The invention belongs to the technical field of computer vision, and in particular relates to image segmentation using a neural network with an ultra-small number of parameters.

Background technique

[0002] The powerful representation ability of convolutional neural networks (CNNs) has improved the performance of various visual image segmentation tasks, such as salient object detection and semantic segmentation. By building more efficient backbone architectures and using more parameters, CNN-based image segmentation models can achieve further performance improvements. Existing image segmentation models rely on backbone architectures pre-trained on the large-scale ImageNet dataset to extract features. Despite their excellent performance, these models are usually slow and computationally intensive, making them poorly suited to low-power devices with limited computational capabilities. Existing lightweight backbone network architectures expl...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T 7/11; G06N 3/04; G06N 3/08
CPC: G06T 7/11; G06T 2207/10004; G06T 2207/20081; G06T 2207/20084; G06N 3/084; G06N 3/045; Y02T 10/40
Inventors: Ming-Ming Cheng (程明明), Shang-Hua Gao (高尚华), Yong-Qiang Tan (谭永强), Cheng-Ze Lu (陆承泽)
Owner NANKAI UNIV