
A method for extracting low-level visual information from complex scenes

A visual-information extraction method applied in the field of scene visual cognition. It addresses shortcomings of existing approaches, which lose detail during extraction and fail to obtain the underlying visual feature values, thereby compromising the reliability of cognitive analysis, and achieves accurate and valid feature extraction.

Active Publication Date: 2022-04-12
BEIHANG UNIV

AI Technical Summary

Problems solved by technology

[0005] Existing methods for extracting underlying visual feature values lose feature-value detail during extraction. As a result, visual cognition experiments on complex scenes cannot obtain effective low-level visual feature values during scene analysis, and this loss severely affects the reliability of cognitive analysis.


Embodiment Construction

[0028] The present invention will be further described below in conjunction with the accompanying drawings and examples. It should be understood that the following examples are intended to facilitate understanding of the present invention and do not limit it.

[0029] In this embodiment, an aircraft cockpit scene is taken as an example, as shown in Figure 1. Because the global complexity of scenes such as the aircraft cockpit is high, local details are lost to information noise when feature values of the underlying visual information are extracted. The present invention introduces an improved convolutional neural network structure, shown in Figure 2, in which four kinds of convolution filters form a multi-depth analysis set that performs image semantic segmentation on scene images. A feature convolution filter is introduced to filter and extract the regional semantics of the complex scene, and then the...
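The "multi-depth analysis set" described above can be illustrated with a minimal NumPy sketch: four convolution filters of different receptive-field sizes applied to the same scene image, producing a stack of response maps at multiple depths of detail. The kernel sizes and the uniform (box) weights here are illustrative assumptions, not the patent's actual parameters.

```python
import numpy as np

def conv2d_same(img, kernel):
    """2D convolution with zero padding so the output matches the input size."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)), mode="constant")
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def multi_depth_analysis(img, sizes=(3, 5, 7, 9)):
    """Apply four box filters of increasing size and stack the responses:
    a toy stand-in for the four-filter multi-depth analysis set."""
    maps = [conv2d_same(img, np.ones((s, s)) / (s * s)) for s in sizes]
    return np.stack(maps, axis=0)  # shape: (4, H, W)

img = np.random.rand(16, 16)
features = multi_depth_analysis(img)
print(features.shape)  # (4, 16, 16)
```

In a real network the four filters would be learned and followed by nonlinearities; the point of the sketch is only that each filter size captures scene structure at a different spatial scale.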


Abstract

The invention belongs to the field of scene visual cognition and in particular relates to a method for extracting underlying visual information from complex scenes, solving the underlying feature-value extraction problem for such scenes. The invention introduces an improved convolutional neural network structure in which four kinds of convolution filters form a multi-depth analysis set that performs image semantic segmentation on scene images. Feature convolution filters are introduced to filter and extract the regional semantics of the complex scene, and transposed convolution is then applied to the result to segment the semantic regions of the scene image. The region segmentation result of the scene semantics is used as an activation bias and substituted into the final underlying visual-information feature-value extraction network, which ensures that no type of scene detail is lost. After semantic segmentation of the scene regions, the invention can extract the underlying-information feature values of the scene required for cognitive experiments and better preserve the details of the complex scene.
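The last two stages of the abstract can be sketched in NumPy: transposed convolution upsamples the coarse semantic result back toward image resolution, and the segmentation map is then injected as an additive "activation bias" before the nonlinearity of the feature-extraction network. The kernel, stride, and bias weight `lam` are illustrative assumptions, not the patent's parameters.

```python
import numpy as np

def transposed_conv2d(x, kernel, stride=2):
    """Standard transposed convolution: scatter the kernel, scaled by each
    input value, onto an upsampled output grid."""
    H, W = x.shape
    kh, kw = kernel.shape
    out = np.zeros((H * stride + kh - stride, W * stride + kw - stride))
    for i in range(H):
        for j in range(W):
            out[i * stride:i * stride + kh, j * stride:j * stride + kw] += x[i, j] * kernel
    return out

def biased_activation(feature_map, seg_mask, lam=0.5):
    """ReLU with the semantic segmentation map as an additive bias, so
    segmented regions keep their low-level detail in the feature maps."""
    return np.maximum(feature_map + lam * seg_mask, 0.0)

seg = np.random.rand(8, 8)                    # coarse semantic map
up = transposed_conv2d(seg, np.ones((2, 2)))  # upsampled to 16 x 16
feat = np.random.rand(*up.shape)              # some low-level feature map
print(biased_activation(feat, up).shape)      # (16, 16)
```

With a 2 x 2 kernel and stride 2 the scattered patches do not overlap, so this particular case reduces to nearest-neighbor upsampling; learned kernels with overlap would blend neighboring semantics instead.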

Description

Technical field

[0001] The invention belongs to the field of scene visual cognition and in particular relates to a method for extracting underlying visual information from complex scenes. The present invention is aimed at complex visual-cognition experimental scenes and, through an algorithmic implementation, extracts the three types of dominant underlying visual attention resources in the global scene.

Background technique

[0002] Research on the human underlying visual processing mechanism and the neuroscience of visual cells indicates that when people observe a scene without the influence of prior concepts, they tend to allocate visual attention resources to areas of high color saturation, areas of high color contrast, and areas with edge/directional features. These three types of attention-resource content are called overt underlying visual features. In the neural signals of human visual attention, these three types of features occupy most of the informati...
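The three overt low-level features named above (saturation, local contrast, edges/direction) have standard image-processing formulations, sketched below in NumPy. These formulas are common textbook choices assumed for illustration, not the ones claimed in the patent.

```python
import numpy as np

def saturation_map(rgb):
    """HSV-style saturation per pixel: 1 - min/max (0 where the pixel is black)."""
    mx = rgb.max(axis=-1)
    mn = rgb.min(axis=-1)
    return np.where(mx > 0, (mx - mn) / np.where(mx > 0, mx, 1.0), 0.0)

def contrast_map(gray, k=3):
    """Local contrast as the standard deviation in a k x k neighborhood."""
    H, W = gray.shape
    p = k // 2
    padded = np.pad(gray, p, mode="edge")
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            out[i, j] = padded[i:i + k, j:j + k].std()
    return out

def edge_map(gray):
    """Gradient magnitude from finite differences (edge/direction cue)."""
    gy, gx = np.gradient(gray)
    return np.hypot(gx, gy)

rgb = np.random.rand(16, 16, 3)
gray = rgb.mean(axis=-1)
print(saturation_map(rgb).shape, contrast_map(gray).shape, edge_map(gray).shape)
```

Saliency models in the literature typically combine such maps (often with center-surround differences and normalization) to predict where attention resources are allocated.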

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06V10/26; G06V10/82; G06N3/04
CPC: G06V10/267; G06N3/045
Inventors: 杜俊敏, 顾昊舒
Owner: BEIHANG UNIV