
Complex scene bottom-layer visual information extraction method

A technology for extracting bottom-layer visual information from complex scenes, applied in the field of low-level visual information extraction for complex scenes. It addresses the problems that existing methods lose feature detail, cannot obtain effective low-level visual feature values, and thereby compromise the reliability of cognitive analysis, so as to ensure the effectiveness and accuracy of the extraction.

Active Publication Date: 2020-12-04
BEIHANG UNIV

AI Technical Summary

Problems solved by technology

[0005] Existing methods for extracting low-level visual feature values lose feature detail during extraction. This defect means that visual cognition experiments on complex scenes cannot obtain effective low-level visual feature values during scene analysis, and the missing detail severely degrades the reliability of the cognitive analysis.




Detailed Description of the Embodiments

[0028] The present invention will be further described below in conjunction with the accompanying drawings and examples. It should be understood that the following examples are intended to facilitate understanding of the present invention and do not limit it.

[0029] In this embodiment, an aircraft cockpit scene is taken as an example, as shown in Figure 1. Because the global complexity of a scene such as an aircraft cockpit is high, local details are lost to information noise when feature values of the bottom-layer visual information are extracted. The present invention therefore introduces an improved convolutional neural network structure, as shown in Figure 2: four kinds of convolution filters form a multi-depth analysis set that performs image semantic segmentation on the scene image. A feature convolution filter is introduced to screen and extract the regional semantics of the complex scene, and then the...
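The paragraph above names the key pieces: four convolution filters of different receptive fields forming a multi-depth analysis set, a feature convolution filter that screens the regional semantics, and transposed convolution that restores resolution for the segmentation. Below is a minimal PyTorch sketch of that arrangement; the kernel sizes, channel counts, and names such as MultiDepthSegmenter are illustrative assumptions, not taken from the patent.

```python
# Minimal sketch of a "multi-depth analysis set": four parallel
# convolutions with different kernel sizes, a 1x1 screening filter,
# and a transposed convolution that recovers full resolution.
# All hyperparameters here are assumptions for illustration.
import torch
import torch.nn as nn

class MultiDepthSegmenter(nn.Module):
    def __init__(self, in_ch=3, mid_ch=16, n_classes=8):
        super().__init__()
        # Four convolution filters with different receptive fields
        # form the multi-depth analysis set.
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, mid_ch, k, padding=k // 2)
            for k in (1, 3, 5, 7)
        ])
        # A 1x1 "feature convolution filter" screens the concatenated
        # multi-depth responses down to one feature map stack.
        self.screen = nn.Conv2d(4 * mid_ch, mid_ch, 1)
        self.pool = nn.MaxPool2d(2)
        # Transposed convolution restores full resolution for the
        # semantic region segmentation.
        self.up = nn.ConvTranspose2d(mid_ch, n_classes, 2, stride=2)

    def forward(self, x):
        feats = torch.cat([torch.relu(b(x)) for b in self.branches], dim=1)
        feats = self.pool(torch.relu(self.screen(feats)))
        return self.up(feats)  # per-pixel semantic logits

seg = MultiDepthSegmenter()
logits = seg(torch.randn(1, 3, 224, 224))  # -> shape (1, 8, 224, 224)
```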



Abstract

The invention belongs to the field of scene visual cognition and particularly relates to a complex scene bottom-layer visual information extraction method, solving the problem of bottom-layer feature value extraction for complex scenes. The method introduces an improved convolutional neural network structure in which four convolutional filters form a multi-depth analysis set that performs image semantic segmentation on a scene image. A feature convolution filter is introduced to screen and extract the regional semantics of the complex scene, and transposed convolution is applied to the extraction result to perform semantic region segmentation of the scene image. The region segmentation result of the scene semantics is substituted into the final bottom-layer visual information feature value extraction network as an activation bias, which ensures that the various types of scene detail are not lost. After semantic segmentation of the scene regions, the bottom-layer scene information feature values required by the cognitive experiment can be extracted well, and details of the complex scene are well preserved.
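The abstract's central device is substituting the segmentation result into the extraction network as an activation bias. One plausible reading is sketched below under stated assumptions: the tensor shapes, the use of per-pixel segmentation confidence, and the blend weight are all hypothetical, not specified by the patent.

```python
# Hypothetical illustration of using a semantic segmentation result as
# an activation bias in a feature-extraction network. Shapes and the
# blending weight are assumptions for illustration only.
import torch

def biased_activation(pre_activation, seg_probs, weight=0.5):
    """Add a per-pixel bias derived from segmentation confidence before
    the nonlinearity, so responses inside confidently segmented semantic
    regions are preserved rather than suppressed as noise.

    pre_activation: (N, C, H, W) raw feature responses
    seg_probs:      (N, C_seg, H, W) softmax output of the segmentation net
    """
    # Confidence of the winning semantic class at each pixel: (N, 1, H, W)
    region_confidence = seg_probs.max(dim=1, keepdim=True).values
    # Positive bias lifts in-region responses above the ReLU threshold.
    return torch.relu(pre_activation + weight * region_confidence)
```

The intent mirrors the abstract's claim: pixels inside confidently segmented regions receive a positive pre-activation boost, so their local detail survives the nonlinearity instead of being discarded with the noise.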

Description

Technical field

[0001] The invention belongs to the field of scene visual cognition and in particular relates to a method for extracting bottom-layer visual information of complex scenes. The invention is aimed mainly at complex visual cognition experimental scenes: through an algorithmic implementation, it extracts three types of dominant bottom-layer visual attention resources from the global scene.

Background technique

[0002] Research on the human low-level visual processing mechanism and the neuroscience of visual cells indicates that when people observe a scene free of the influence of prior concepts, they tend to allocate visual attention resources to areas of high color saturation, areas of high color contrast, and areas with edge/direction features. These three types of attention resource content are called overt bottom-layer visual features. In the neural signals of human visual attention, these three types of features occupy most of the informati...
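Since the background defines the three overt low-level features concretely (high color saturation, high color contrast, edge/direction), a short sketch of one conventional way to compute such maps may help. The neighborhood size and the contrast definition are assumptions, and OpenCV is used only as a convenient baseline, not as the patent's method.

```python
# One plausible way to compute the three overt low-level feature maps
# named in the background: saturation, local color contrast, and
# edge magnitude/direction. Kernel sizes are assumptions.
import cv2
import numpy as np

def lowlevel_feature_maps(bgr_image):
    # Saturation channel from HSV, scaled to [0, 1].
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    saturation = hsv[:, :, 1].astype(np.float32) / 255.0

    # Local contrast: deviation of each pixel from its neighborhood mean.
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY).astype(np.float32)
    local_mean = cv2.blur(gray, (9, 9))
    contrast = np.abs(gray - local_mean) / 255.0

    # Edge magnitude and orientation from Sobel gradients.
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    edge_mag = cv2.magnitude(gx, gy)
    edge_dir = cv2.phase(gx, gy, angleInDegrees=True)

    return saturation, contrast, edge_mag, edge_dir
```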


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06K9/34; G06N3/04
CPC: G06V10/267; G06N3/045
Inventors: 杜俊敏 (Du Junmin), 顾昊舒 (Gu Haoshu)
Owner: BEIHANG UNIV