
An Indoor Scene Recognition Method Combining Deep Learning and Sparse Representation

An indoor scene recognition method based on sparse representation technology, applied in the fields of indoor scene recognition and image processing. It addresses problems such as small inter-category differences, occlusion, poor recognition performance, and scene complexity, achieving high practical performance and improving the recognition rate and accuracy.

Active Publication Date: 2019-06-21
NANJING UNIV OF POSTS & TELECOMM


Problems solved by technology

[0004] The technical problem to be solved by the present invention is to overcome the deficiencies of the prior art by providing an indoor scene recognition method that combines deep learning and sparse representation. Indoor scene recognition is more complex and difficult than outdoor scene recognition because of small intra-category differences, occlusion, and changes in scale and viewing angle, which lead to poor recognition performance; the invention improves the recognition rate and robustness of the indoor scene recognition algorithm.




Embodiment Construction

[0025] Embodiments of the present invention will be described below in conjunction with the accompanying drawings.

[0026] As shown in Figure 1, the present invention designs an indoor scene recognition method combining deep learning and sparse representation, which comprises three major steps: bottom-level feature extraction, middle-level feature construction, and classifier design. Specifically, it includes the following steps:

[0027] Step A. Randomly select several indoor scene images from the indoor scene library as training samples, and use the remaining indoor scene images in the indoor scene library as test samples.
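The random split of Step A can be sketched as follows. The one-subdirectory-per-category layout and the `train_per_class` count are illustrative assumptions; the patent does not fix these values.

```python
import os
import random

def split_scene_library(library_dir, train_per_class=80, seed=0):
    """Randomly split an indoor scene library into training and test samples.

    Assumes one subdirectory per scene category (the usual layout of public
    scene libraries); train_per_class is an illustrative choice.
    """
    rng = random.Random(seed)
    train, test = [], []
    for category in sorted(os.listdir(library_dir)):
        cat_dir = os.path.join(library_dir, category)
        images = sorted(os.listdir(cat_dir))
        rng.shuffle(images)
        # First train_per_class shuffled images become training samples,
        # the remainder become test samples, as in Step A.
        train += [(category, f) for f in images[:train_per_class]]
        test += [(category, f) for f in images[train_per_class:]]
    return train, test
```

Fixing the random seed makes the split reproducible across runs, which matters when comparing recognition rates between experiments.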

[0028] Since the present invention is applied to indoor scene images, a publicly available indoor scene library should be selected to verify the effectiveness of the algorithm. The typical MIT-67 indoor scene library is chosen in this example. The images in this library are not uniform in size, so it is pr...
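Because the library's images are not uniform in size, a size-normalization step is needed before feature extraction. A minimal sketch using nearest-neighbour resampling with NumPy follows; the 256x256 target size is an assumption for illustration, not a value taken from the patent.

```python
import numpy as np

def normalize_size(img, out_h=256, out_w=256):
    """Nearest-neighbour resize of an H x W (x C) image array so that all
    library images share a uniform size before feature extraction.

    The 256x256 default is an illustrative assumption.
    """
    h, w = img.shape[:2]
    # Map each output pixel back to its nearest source pixel.
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return img[rows][:, cols]
```

In practice an image library such as Pillow with bilinear interpolation would give smoother results; the NumPy version above only illustrates the normalization step itself.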


Abstract

The invention discloses an indoor scene recognition method combining deep learning and sparse representation. Object categories are detected in the training and test samples to construct the bottom-level features of each indoor scene image; a bag-of-words model combines the bottom-level features and spatial features of each indoor scene image to construct mid-level features; the mid-level features of the training samples are combined to construct a sparse dictionary; the sparse dictionary is used to sparsely represent each test sample, residuals are calculated from the obtained sparse solution and the input test sample, and the object category of the test sample is judged according to the size of the residuals, yielding the output category. The invention can accurately identify indoor scenes, effectively improves the accuracy and robustness of indoor scene recognition, and has high practical performance.
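The residual-based decision described in the abstract can be sketched as follows. The sparse solver that produces the coefficient vector `x` (e.g. orthogonal matching pursuit or l1-minimization) is assumed to exist and is not shown; only the per-class residual rule of sparse-representation classification is implemented.

```python
import numpy as np

def src_classify(D, labels, x, y):
    """Residual decision rule of sparse-representation classification.

    D      : dictionary whose columns are training mid-level features
    labels : class label of each dictionary column
    x      : sparse coefficient vector for test sample y (from any solver)
    y      : test sample feature vector

    Keeps only the coefficients belonging to each class, reconstructs the
    test sample from them, and returns the class with the smallest residual.
    """
    residuals = {}
    for c in set(labels):
        mask = np.array([l == c for l in labels])
        xc = np.where(mask, x, 0.0)              # delta_c(x): class-c coefficients
        residuals[c] = np.linalg.norm(y - D @ xc)
    return min(residuals, key=residuals.get)
```

With a well-chosen dictionary, a test sample is reconstructed most faithfully by the training features of its own class, so the smallest residual identifies the category.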

Description

Technical Field

[0001] The invention relates to an indoor scene recognition method combining deep learning and sparse representation, and belongs to the technical field of image processing.

Background Technique

[0002] With the development and popularization of information technology and intelligent robots, scene recognition has become an important research issue in the fields of computer vision and pattern recognition. Scene image classification is the automatic classification of image datasets according to a given set of semantic labels. Scene recognition models are mainly divided into three categories: those based on low-level features, those based on mid-level features, and those based on visual vocabulary. Low-level-feature methods classify scene images by extracting global or block-wise texture, color, and other features of the scene image, as in the research of Vailaya, Szummer, et al., but this method of extracting ...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06K9/62; G06N3/08
CPC: G06N3/08; G06F18/241; G06F18/214
Inventors: 孙宁, 朱小英, 刘佶鑫, 李晓飞
Owner: NANJING UNIV OF POSTS & TELECOMM