
Image attention semantic target segmentation method based on fMRI visual function data DeconvNet

A method combining fMRI visual-function data with semantic segmentation, applicable to instruments, character and pattern recognition, computer components, etc. It addresses the lack of prior research results on this problem and achieves improved analysis capability and accuracy.

Active Publication Date: 2016-10-12
THE PLA INFORMATION ENG UNIV

AI Technical Summary

Problems solved by technology

Although existing research can classify the fMRI visual-function data evoked by a given category of image stimulus, there are no corresponding results on extracting the semantics of the target that the subject is attending to.

Method used


Image

  • Image attention semantic target segmentation method based on fMRI visual function data DeconvNet

Examples


Embodiment 1

[0017] Embodiment 1. As shown in figure 1, an image attention-target semantic segmentation method based on a DeconvNet over fMRI visual-function data comprises the following steps:

[0018] Step 1. Collect fMRI visual-function data from subjects viewing natural-scene image stimuli. Train a deep convolutional neural network model mapping stimulus images to fMRI visual-function data, and a linear mapping model from fMRI visual-function data to target categories; connect the trained deep convolutional network to the linear mapping model and optimize the combined network.
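The two models in Step 1 can be sketched as follows. This is a minimal PyTorch illustration, not the patent's actual architecture: the layer sizes, voxel count (500), and class count (10) are assumptions made for the example.

```python
import torch
import torch.nn as nn

class EncodingModel(nn.Module):
    """Toy encoder: maps a stimulus image to predicted fMRI voxel responses.
    All sizes here are illustrative assumptions."""
    def __init__(self, n_voxels=500):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),          # fixed 4x4 spatial output
        )
        self.to_voxels = nn.Linear(32 * 4 * 4, n_voxels)

    def forward(self, img):
        f = self.features(img).flatten(1)     # (B, 512) image features
        return self.to_voxels(f)              # (B, n_voxels) predicted responses

# Linear mapping model: fMRI responses -> attended-target category scores
n_voxels, n_classes = 500, 10
readout = nn.Linear(n_voxels, n_classes)

encoder = EncodingModel(n_voxels)
img = torch.randn(2, 3, 64, 64)               # batch of 2 stimulus images
pred_fmri = encoder(img)                      # (2, 500) voxel responses
logits = readout(pred_fmri)                   # (2, 10) category scores
```

In practice the two stages would be trained jointly end-to-end, as the step describes, by backpropagating a classification loss through both the linear readout and the convolutional encoder.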

[0019] Step 2. Construct a deconvolution deep network model (DeconvNet) symmetric to the deep convolutional neural network optimized in Step 1, and optimize the deconvolution network using the fMRI visual-function data and the semantic segmentation results corresponding to the stimulus images, obtaining a mapping from fMRI visual-function data to pixel-by-pixel semantic segmentation results, and o...
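A minimal sketch of the Step 2 decoder in PyTorch, assuming a small architecture that mirrors a toy convolutional encoder. The voxel count (500), class count (10), and layer sizes are illustrative assumptions, not the patent's configuration.

```python
import torch
import torch.nn as nn

class FMRIDeconvNet(nn.Module):
    """Toy decoder: maps an fMRI response vector to per-pixel class logits.
    Sizes are illustrative assumptions."""
    def __init__(self, n_voxels=500, n_classes=10):
        super().__init__()
        self.to_feature = nn.Linear(n_voxels, 32 * 4 * 4)
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=4), nn.ReLU(),   # 4x4 -> 16x16
            nn.ConvTranspose2d(16, n_classes, 4, stride=4),       # 16x16 -> 64x64
        )

    def forward(self, fmri):
        f = self.to_feature(fmri).view(-1, 32, 4, 4)
        return self.deconv(f)                 # (B, n_classes, 64, 64) logits

decoder = FMRIDeconvNet()
fmri = torch.randn(2, 500)                    # measured fMRI responses
seg_logits = decoder(fmri)                    # per-pixel class scores
labels = seg_logits.argmax(dim=1)             # (2, 64, 64) pixel-wise labels
```

Training would minimize a pixel-wise cross-entropy loss between `seg_logits` and the semantic segmentation ground truth of the corresponding stimulus image, as the step describes.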




Abstract

The invention relates to an image attention semantic target segmentation method based on a DeconvNet over fMRI visual-function data. A deep convolutional neural network model is trained on fMRI visual-function data collected while subjects view natural-scene image stimuli, and the data are mapped to attention-target category labels to optimize the model. A deep deconvolution network symmetric to the optimized convolutional network is then constructed, and its parameters are optimized using the fMRI visual-function data and the semantic segmentation results corresponding to the stimulus images, yielding a mapping from fMRI visual-function data to pixel-by-pixel semantic segmentation results. At test time, fMRI visual-function data are collected while the subject views a test image; the subject's attention-target category and the pixel-by-pixel semantic segmentation result are determined, and the attention-target region and its corresponding target semantics are segmented. The method analyzes the fMRI visual-function data evoked when a subject views a natural-scene image, extracts all target categories in the stimulus image, obtains the semantic segmentation result, and improves the accuracy of attention-target extraction.
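The final step in the abstract, segmenting out the attention-target region once the pixel-wise labels and the attended category are decoded, can be illustrated with a toy example. The label map and category below are fabricated purely for illustration:

```python
import numpy as np

# Toy 6x6 pixel-wise class-label map decoded from fMRI (values are made up)
seg = np.array([
    [0, 0, 0, 0, 0, 0],
    [0, 2, 2, 0, 0, 0],
    [0, 2, 2, 2, 0, 0],
    [0, 0, 2, 2, 0, 0],
    [0, 0, 0, 0, 1, 1],
    [0, 0, 0, 0, 1, 1],
])
attended_class = 2    # category predicted by the linear fMRI readout (assumed)

mask = (seg == attended_class)       # boolean mask of the attended-target region
coverage = mask.mean()               # fraction of the image it occupies
print(int(mask.sum()), round(float(coverage), 3))   # -> 7 0.194
```

Selecting only the pixels of the attended category turns the full semantic segmentation into the attention-target region the abstract describes.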

Description

Technical field
[0001] The present invention relates to the field of human-computer interaction and fMRI visual-function data processing, and in particular to a method for semantically segmenting the objects of a subject's attention in an image, based on a DeconvNet over fMRI visual-function data.
Background technique
[0002] Vision is the most important channel through which humans acquire external information, and how the brain interprets it is a central topic of neuroscience research. Researchers have long tried to simulate and extend human visual function with computers from many angles. A longstanding and compelling question in neuroscience is how the human brain can accomplish advanced visual tasks such as object recognition and scene understanding using so little energy. In recent years, neuroimaging technology has made great progress, and functional magnetic resonance imaging (fMRI) has become the main neuroimaging method for stud...

Claims


Application Information

IPC(8): G06K9/62
CPC: G06F18/2163; G06F18/214; G06F18/29
Inventors: 闫镔 (Yan Bin), 王林元 (Wang Linyuan), 乔凯 (Qiao Kai), 童莉 (Tong Li), 曾颖 (Zeng Ying), 徐一夫 (Xu Yifu), 贺文颉 (He Wenjie), 张驰 (Zhang Chi), 高辉 (Gao Hui)
Owner THE PLA INFORMATION ENG UNIV