Image feature extraction method based on weighted depth features

An image feature extraction technology based on weighted depth features, applied in the field of image processing. It addresses the problems of complex computation, poor versatility, and low feature robustness in existing methods, and achieves good robustness, easy implementation, and good versatility.

Pending Publication Date: 2019-06-07
GUANGDONG UNIV OF TECH


Problems solved by technology

[0005] The present invention provides an image feature extraction method based on weighted depth features in order to overcome the defects of poor versatility, low feature robustness, and complex computation in existing methods.



Examples


Embodiment 1

[0049] As shown in Figure 1, an image feature extraction method based on weighted depth features comprises the following steps:

[0050] S1: Select the existing VGG16 image classification network and pre-train it on the ImageNet image data set; after pre-training, remove the softmax layer and fully connected layers from the network model to obtain the final VGG16 image classification network model. (The existing model is a classification network model or a location detection network model, selected according to the specific image task. The image data set used for pre-training is an existing common data set or one based on the images to be extracted.)

[0051] S2: First, input the image to be extracted directly into the VGG16 image classification network model (without the softmax layer and fully connected layers) for forward calculation, and then extract the convolutional layers in front of all pooling layers in the VGG16 image classification n...

Embodiment 2

[0098] A method for extracting image features based on weighted depth features, said method comprising the following steps:

[0099] S1: In this embodiment, the ResNet network model is selected and pre-trained on the COCO image data set; after pre-training, the softmax layer and fully connected layer are removed from the network model to obtain the final ResNet image classification network model. (The existing model is a classification network model or a location detection network model, selected according to the specific image task. The image data set used for pre-training is an existing common data set or one based on the images to be extracted.)

[0100] S2: First, input the image to be extracted directly into the ResNet image classification network model (without the softmax layer and fully connected layer) for forward calculation, and then extract the convolutional layers in front of all pooling layers in the ResNet image...



Abstract

The invention discloses an image feature extraction method based on weighted depth features, comprising the following steps: selecting an existing network model, pre-training it on an image data set, and then removing the network prediction layer from the model to obtain the final network model; inputting the to-be-extracted image into the final network model for forward calculation, and extracting the convolutional layers in front of all pooling layers in the final network model as depth feature maps of the image; calculating a feature aggregation vector for each convolutional layer, and performing extraction and zero-mean normalization on each feature aggregation vector to obtain depth feature vectors; carrying out dimensionality reduction on all the depth feature vectors, and assigning each reduced depth feature vector a weight parameter indexed by its convolutional layer; and fusing the weighted depth feature vectors to obtain the depth image features. The method has high universality; the obtained features are robust, expressive, and rich in semantic information; and the computation is simple.
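The pipeline in the abstract can be sketched end-to-end on synthetic feature maps. Several details are assumptions: sum pooling as the aggregation, simple truncation standing in for a learned PCA projection, and illustrative per-layer weights — the patent's actual aggregation, reduction, and weighting schemes may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
# one synthetic (C, H, W) feature map per pooling stage, VGG16-like depths
feature_maps = [rng.standard_normal((c, 7, 7)) for c in (64, 128, 256, 512, 512)]

def aggregate(fmap):
    # sum-pool each channel's activation map into one scalar (assumed scheme)
    return fmap.reshape(fmap.shape[0], -1).sum(axis=1)

def zero_mean(v):
    # zero-mean normalization of the aggregation vector
    return v - v.mean()

def reduce_dim(v, dim):
    # toy stand-in: real PCA needs a projection fit on many images;
    # here we simply truncate to `dim` components for illustration
    return v[:dim]

weights = [0.1, 0.1, 0.2, 0.3, 0.3]  # illustrative per-layer weight parameters

parts = []
for w, fmap in zip(weights, feature_maps):
    v = zero_mean(aggregate(fmap))
    parts.append(w * reduce_dim(v, 64))

# fuse the weighted, reduced vectors into one depth image feature
image_feature = np.concatenate(parts)
print(image_feature.shape)  # (320,)
```

Fusion by concatenation preserves which pooling stage each component came from, so shallow texture cues and deep semantic cues both survive in the final descriptor.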

Description

Technical Field

[0001] The present invention relates to the field of image processing, and more specifically, to an image feature extraction method based on weighted depth features.

Background Technique

[0002] In recent years, with the explosion of image big data, technologies such as image recognition, retrieval, classification, positioning and detection have made great progress, and among these commonly used technologies, image feature extraction is the key: the quality of the final image task largely depends on the quality of the extracted features. Therefore, how to extract better image features is the main research direction in this field. Traditional image classification, positioning and retrieval tasks generally rely on basic features of the image, such as color and texture. These features cannot cope with scale change, occlusion, illumination, affine transformation, etc. The emergence of image feature extraction algorithms has sol...

Claims


Application Information

IPC(8): G06K9/46, G06K9/62, G06N3/04
Inventors: 刘文印, 王崎, 康培培, 徐凯, 杨振国, 谈季
Owner GUANGDONG UNIV OF TECH