
Scene recognition method based on convolution multi-features and deep random forest

A random-forest-based scene recognition technology, applied in character and pattern recognition, neural learning methods, computer components, etc.; it addresses problems such as the need for large numbers of training samples, heavy dependence on parameter tuning, and unsatisfactory recognition results.

Active Publication Date: 2018-06-01
ZHEJIANG NORMAL UNIVERSITY
Cites: 5 | Cited by: 19

AI Technical Summary

Problems solved by technology

However, this method also has problems: 1) the recognition of scene images is a top-down process that must take both global and local features into account, so using a convolutional neural network alone for scene image recognition does not give satisfactory results; 2) a large number of samples is required for training, which makes the method unsuitable for small-scale data tasks, and the training time is very long; 3) the structure of a deep neural network is very complex, heavily dependent on parameter tuning, and contains a large number of hyperparameters, which is not conducive to system stability, and the many design choices in a convolutional neural network, such as the structure of the convolutional layers, make it difficult to analyze.


Examples


Embodiment Construction

[0044] The implementation of the present invention is described in detail below in conjunction with the accompanying drawings and examples, so that the process of applying technical means to solve the technical problems and achieve the technical effects of the invention can be fully understood and put into practice.

[0045] The scene recognition method based on convolutional multi-features and deep random forest in the embodiment of the present application mainly refers to extracting multiple types of features with a convolutional neural network and inputting them into a deep random forest for scene recognition.
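The deep random forest itself is not reproduced in this excerpt. As an illustration only, the sketch below shows a common cascade ("deep") forest construction in Python, in which each level's class-probability outputs are concatenated to the original features before the next level; the use of scikit-learn, the two forest types, and the fixed number of levels are assumptions for the sketch, not details taken from the patent.

```python
# A minimal cascade ("deep") random forest sketch. X_train, y_train and
# X_test are placeholders for the convolutional multi-features and labels.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
from sklearn.model_selection import cross_val_predict

def cascade_forest_fit_predict(X_train, y_train, X_test, n_levels=3):
    """Train a small cascade of forests and return test-set class predictions."""
    aug_train, aug_test = X_train, X_test
    last_test_proba = None
    for level in range(n_levels):
        train_probas, test_probas = [], []
        for Forest in (RandomForestClassifier, ExtraTreesClassifier):
            clf = Forest(n_estimators=100, random_state=level)
            # Out-of-fold probabilities keep the next level's augmented
            # features from leaking the training labels.
            oof = cross_val_predict(clf, aug_train, y_train,
                                    cv=3, method="predict_proba")
            clf.fit(aug_train, y_train)
            train_probas.append(oof)
            test_probas.append(clf.predict_proba(aug_test))
        # Concatenate the original features with this level's class vectors.
        aug_train = np.hstack([X_train] + train_probas)
        aug_test = np.hstack([X_test] + test_probas)
        last_test_proba = np.hstack(test_probas)
    # Average the final level's forests and take the most probable class.
    n_classes = len(np.unique(y_train))
    avg = last_test_proba.reshape(len(X_test), -1, n_classes).mean(axis=1)
    return avg.argmax(axis=1)
```

In a gcForest-style cascade the number of levels is normally grown until validation accuracy stops improving; a fixed n_levels is used here only to keep the sketch short.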

[0046] As shown in Figure 1, the specific implementation of the scene recognition method based on convolutional multi-features and deep random forest is as follows:

[0047] Step S110, constructing a VGG-19 convolutional neural network, and training ...
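Step S110 names a VGG-19 backbone, but the excerpt is truncated and no framework is specified. The following is a minimal sketch, assuming PyTorch/torchvision, of loading a pretrained VGG-19 and capturing the output of its last convolutional layer, which the abstract says is later Fisher vector encoded.

```python
# Minimal VGG-19 sketch (PyTorch/torchvision assumed; not from the patent text).
import torch
from torchvision import models, transforms

vgg19 = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1)
vgg19.eval()

captured = {}

def save_last_conv(module, inputs, output):
    # Feature map of the final conv layer: (batch, 512, 14, 14) for 224x224 input.
    captured["last_conv"] = output.detach()

# Index 34 is the last Conv2d in torchvision's VGG-19 feature stack.
vgg19.features[34].register_forward_hook(save_last_conv)

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Usage with a PIL image `img`:
# with torch.no_grad():
#     _ = vgg19(preprocess(img).unsqueeze(0))
# conv_map = captured["last_conv"]   # local descriptors for later encoding
```

Fine-tuning a pretrained backbone rather than training from scratch is a common choice when training samples are limited; the truncated step does not say which approach the patent takes.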



Abstract

The invention discloses a scene recognition method based on convolution multi-features and deep random forest. A sparsely coded spatial pyramid matching method and Fisher vectors are applied to a convolutional neural network (CNN) for feature extraction, and the extracted features are fed to a deep random forest to improve scene recognition accuracy. The method trains the CNN on the training images, performs Fisher vector encoding on the output of the last convolutional layer of the CNN, deconvolves the CNN output, computes the distribution of image feature points at different resolutions with the spatial pyramid matching method to form a multi-scale spatial local feature, and then uses the deep random forest for classification, thereby improving scene recognition accuracy.
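The abstract's Fisher vector step operates on local descriptors taken from the last convolutional layer. As an illustration only, the sketch below shows a standard GMM-based Fisher vector encoder for such descriptors; the GMM size, diagonal covariances, and the power/L2 normalization are conventional choices assumed here, not details quoted from the patent.

```python
# Fisher vector encoding sketch for conv-layer descriptors (assumptions noted above).
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_gmm(all_descriptors, K=32, seed=0):
    """Fit a diagonal-covariance GMM to local descriptors pooled over the training set."""
    return GaussianMixture(n_components=K, covariance_type="diag",
                           random_state=seed).fit(all_descriptors)

def fisher_vector(descriptors, gmm):
    """Encode one image's local descriptors (T, D) as a 2*K*D Fisher vector."""
    T, D = descriptors.shape
    gamma = gmm.predict_proba(descriptors)            # (T, K) soft assignments
    w, mu, var = gmm.weights_, gmm.means_, gmm.covariances_
    sigma = np.sqrt(var)                              # (K, D)
    diff = (descriptors[:, None, :] - mu) / sigma     # (T, K, D) whitened offsets
    # Gradients with respect to the Gaussian means and standard deviations.
    g_mu = (gamma[:, :, None] * diff).sum(axis=0) / (T * np.sqrt(w)[:, None])
    g_sigma = (gamma[:, :, None] * (diff ** 2 - 1)).sum(axis=0) / (T * np.sqrt(2 * w)[:, None])
    fv = np.hstack([g_mu.ravel(), g_sigma.ravel()])
    fv = np.sign(fv) * np.sqrt(np.abs(fv))            # power normalization
    return fv / (np.linalg.norm(fv) + 1e-12)          # L2 normalization
```

The multi-scale spatial pyramid features described alongside the Fisher vector would then be concatenated with this encoding before being passed to the deep random forest.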

Description

Technical Field

[0001] The invention relates to computer pattern recognition technology, and in particular to a scene recognition method based on convolutional multi-features and deep random forest.

Background

[0002] Digital images and digital videos carry a large amount of visual information, and computer vision is the technology that uses computers to intelligently extract and analyze the useful information they contain. With the rapid development of computer theory and applications, the ability of computers to process images and videos has greatly improved, making computer vision a key research direction in the fields of computing and artificial intelligence.

[0003] The recognition and analysis of scene images is an important topic in computer pattern recognition and an important branch of image recognition. Scene recognition is also involved in the field of ...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/62; G06K9/46; G06N3/04; G06N3/08
CPC: G06N3/08; G06V10/513; G06V10/44; G06N3/045; G06F18/285
Inventor: 熊继平, 叶童, 王妃
Owner: ZHEJIANG NORMAL UNIVERSITY