
Image feature extraction and training method based on three-dimensional convolutional neural network

An image feature extraction and three-dimensional convolution technology, applied in the field of image recognition and deep learning, that addresses the problems of heavy computation, low recognition rates, and information loss, with the aims of improving accuracy, raising the recognition rate, and optimizing the training effect.

Active Publication Date: 2018-10-30
SHAANXI NORMAL UNIV

AI Technical Summary

Problems solved by technology

For such three-dimensional images, one current solution is to average all slices along a certain dimension to obtain a single two-dimensional image, which is then recognized with a two-dimensional deep learning algorithm. Because this approach averages an entire dimension away, a great deal of information is lost and not all features can be extracted effectively.
Another method treats a certain dimension as the channel axis of a two-dimensional convolutional neural network: however many slices the image has along that dimension becomes the number of channels, and the same two-dimensional convolutional algorithm is then used for recognition. Although this method appears to lose no information, it breaks the three-dimensional image into isolated two-dimensional slices and extracts only two-dimensional features, ignoring the correlation between slices along the third dimension; the amount of computation is also large. As a result it does not match the nature of three-dimensional images, considerable information is lost during recognition, and the recognition rate is low.
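As a rough sketch of the contrast just described, the snippet below compares the two 2D reductions (averaging one dimension away, and treating slices as input channels) with a direct three-dimensional convolution. The use of PyTorch and all tensor and kernel sizes are illustrative assumptions, not details taken from the patent.

```python
# Illustrative sketch only: contrasts the two prior 2D reductions with a
# direct 3D convolution. PyTorch and all sizes are assumed for the example.
import torch
import torch.nn as nn

# A toy single-channel 3D volume: (batch, channel, depth, height, width)
volume = torch.randn(1, 1, 32, 64, 64)

# Reduction 1: average the depth dimension away, then convolve in 2D.
# The averaging discards most information along the depth axis.
avg_2d = volume.mean(dim=2)                               # -> (1, 1, 64, 64)
conv2d_a = nn.Conv2d(1, 8, kernel_size=3, padding=1)
feat_a = conv2d_a(avg_2d)                                 # 2D features only

# Reduction 2: treat the 32 depth slices as 32 input channels of a 2D network.
# The depth axis becomes an unordered channel axis, so the kernel never slides
# along depth and depth structure is not modelled as a spatial dimension.
slices_as_channels = volume.squeeze(1)                    # -> (1, 32, 64, 64)
conv2d_b = nn.Conv2d(32, 8, kernel_size=3, padding=1)
feat_b = conv2d_b(slices_as_channels)

# 3D convolution: a 3x3x3 kernel slides over depth, height and width together,
# so the extracted features reflect structure in all three dimensions at once.
conv3d = nn.Conv3d(1, 8, kernel_size=3, padding=1)
feat_c = conv3d(volume)                                   # -> (1, 8, 32, 64, 64)
```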




Embodiment Construction

[0039] The present invention will be further described in detail below in conjunction with specific embodiments, which are intended to explain rather than limit the present invention.

[0040] The present invention is an image feature extraction and training method based on a three-dimensional convolutional neural network. The method constructs a three-dimensional convolutional neural network model together with a corresponding training method. It differs from previous two-dimensional convolutional neural network approaches, in which the information of one dimension of a 3D image must be averaged away or split into many channels, so that 3D features cannot be extracted effectively. The present method applies 3D convolution directly to extract 3D features, and during training it estimates the gradient with a proportionally balanced, standardized mini-batch input mechanism, avoiding the drawback that some sample categories cannot be effectively identified under purely random sample input...
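The "proportionally balanced, standardized mini-batch input mechanism" is described only briefly in this excerpt. The sketch below shows one plausible reading, in which every mini-batch draws the same number of samples from each class so that gradient estimates are not dominated by frequent classes; the function name, data layout and batch composition are assumptions made for illustration rather than the patent's exact procedure.

```python
# Hedged sketch of a class-balanced mini-batch generator. This is one possible
# interpretation of "proportionally balanced" mini-batch input, not the
# patent's exact mechanism.
import random
from collections import defaultdict

def balanced_batches(samples, labels, per_class, shuffle=True):
    """Yield mini-batches containing `per_class` examples of every class."""
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)
    if shuffle:
        for idxs in by_class.values():
            random.shuffle(idxs)
    # The number of full balanced batches is limited by the rarest class.
    n_batches = min(len(v) for v in by_class.values()) // per_class
    for b in range(n_batches):
        batch = []
        for idxs in by_class.values():
            batch.extend(idxs[b * per_class:(b + 1) * per_class])
        random.shuffle(batch)
        yield [samples[i] for i in batch], [labels[i] for i in batch]

# Toy usage (assumed data): class 0 is over-represented, yet every batch
# still contains exactly one example of each class.
data = [f"volume_{i}" for i in range(12)]
labels = [0, 0, 0, 0, 0, 0, 1, 1, 1, 2, 2, 2]
for xb, yb in balanced_batches(data, labels, per_class=1):
    print(sorted(yb))   # -> [0, 1, 2] for every batch
```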



Abstract

The invention discloses an image feature extraction and training method based on a three-dimensional convolutional neural network. The method comprises the following steps: 1) dimension normalization processing is performed on the input images used for feature extraction; 2) a three-dimensional convolutional neural network comprising a convolutional layer, an activation layer, a pooling layer, a fully connected layer and an output layer is constructed; and 3) the constructed three-dimensional convolutional neural network is trained to obtain an optimized network, feature extraction is performed on the input images, and classified recognition of the input images is completed. In this method, feature extraction and recognition of three-dimensional images are carried out by the three-dimensional convolutional neural network, which convolves the three-dimensional images directly and extracts their three-dimensional spatial features, so that the feature patterns of three-dimensional images can be expressed more effectively and the goal of classifying and recognizing the images is achieved.
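As a minimal sketch of the structure the abstract lists (dimension normalization of the input, then a convolutional layer, an activation layer, a pooling layer, a fully connected layer and an output layer): the framework, the 32×32×32 target shape, trilinear interpolation and all layer sizes below are assumptions chosen for illustration, not details specified by the abstract.

```python
# Hedged sketch of a small 3D CNN matching the layer types listed in the
# abstract. All concrete sizes and the choice of PyTorch are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Simple3DCNN(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.conv = nn.Conv3d(1, 8, kernel_size=3, padding=1)  # convolutional layer
        self.act = nn.ReLU()                                    # activation layer
        self.pool = nn.MaxPool3d(kernel_size=2)                 # pooling layer
        self.fc = nn.Linear(8 * 16 * 16 * 16, 64)               # fully connected layer
        self.out = nn.Linear(64, num_classes)                   # output layer

    def forward(self, x):
        # Step 1 (dimension normalization): resize every input volume to a
        # common 32x32x32 grid; trilinear interpolation is one possible choice.
        x = F.interpolate(x, size=(32, 32, 32), mode="trilinear", align_corners=False)
        x = self.pool(self.act(self.conv(x)))   # -> (N, 8, 16, 16, 16)
        x = x.flatten(start_dim=1)
        x = self.act(self.fc(x))
        return self.out(x)                      # class scores for recognition

# Toy usage: two single-channel volumes whose original size differs from 32^3.
model = Simple3DCNN(num_classes=3)
scores = model(torch.randn(2, 1, 40, 48, 48))
print(scores.shape)   # torch.Size([2, 3])
```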

Description

Technical field

[0001] The invention belongs to the field of image recognition and deep learning, relates to three-dimensional image feature extraction and recognition, and is specifically an image feature extraction and training method based on a three-dimensional convolutional neural network.

Background technique

[0002] Image recognition is a technology in which computers process, analyze, and understand images in order to identify targets and objects in various patterns. It has been applied to industrial security, daily life, education and other areas, and is an important field of artificial intelligence. In order to teach computers to recognize images as humans do, many image recognition methods have been proposed. The traditional recognition process includes image preprocessing, image segmentation, feature extraction, and judgment matching. Consequently there are a large number of different algorithms at each intermediate step, and each intermediate step affects the final...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06N3/04, G06K9/46, G06K9/62
CPC: G06V10/462, G06N3/045, G06F18/214, G06F18/24
Inventor: 葛宝, 李雅迪
Owner: SHAANXI NORMAL UNIV