CNN (convolutional neural network)-based fMRI (functional magnetic resonance imaging) visual function data object extraction method

A technology for visual function data and target extraction, applied in image data processing, instruments, and computing. It addresses the lack of prior research results and achieves improved analytical ability and accuracy.

Active Publication Date: 2016-10-26
THE PLA INFORMATION ENG UNIV

AI Technical Summary

Problems solved by technology

Although existing studies have been able to analyze fMRI visual function data elicited by a single category of image stimuli and determine that category, for...



Examples


Embodiment 1

[0025] Embodiment 1. As shown in Figure 1, a CNN-based fMRI visual function data target extraction method includes the following steps:

[0026] Step 1. Collect fMRI visual function data from subjects stimulated by natural images of complex scenes; train a deep convolutional neural network model mapping stimulus images to fMRI visual function data, and a linear mapping model from fMRI visual function data to target categories. The deep convolutional neural network model includes convolutional layers, rectified linear unit (ReLU) layers, max pooling layers, and fully connected layers.
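The forward pipeline of Step 1 can be sketched in NumPy as follows. This is a minimal illustration, not the patented implementation: the image size, kernel, voxel count, and category count are all made-up, and the weights are random stand-ins rather than trained parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, k):
    """Valid 2-D convolution, single channel."""
    H, W = x.shape
    kh, kw = k.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def relu(x):
    """Rectified linear unit layer."""
    return np.maximum(x, 0.0)

def maxpool2(x):
    """2x2 max pooling with stride 2 (input cropped to even size)."""
    H, W = x.shape
    x = x[: H - H % 2, : W - W % 2]
    return x.reshape(H // 2, 2, W // 2, 2).max(axis=(1, 3))

# Hypothetical sizes: 16x16 stimulus image, 3x3 kernel, 32 voxels, 4 categories.
image = rng.standard_normal((16, 16))
kernel = rng.standard_normal((3, 3))

feat = maxpool2(relu(conv2d(image, kernel)))   # conv -> ReLU -> max pool
W_fc = rng.standard_normal((32, feat.size))    # fully connected layer
voxels = W_fc @ feat.ravel()                   # predicted fMRI voxel responses

W_lin = rng.standard_normal((4, 32))           # linear mapping: voxels -> categories
scores = W_lin @ voxels
print(voxels.shape, scores.shape)
```

In a real setting both `W_fc` and `W_lin` would be fit to the collected stimulus/fMRI training data; the sketch only shows the shapes flowing through the two models.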

[0027] Step 2. Add feedback layers to the deep convolutional neural network model to obtain a convolutional neural feedback model; compose the convolutional neural feedback model with the linear mapping model obtained in Step 1 to obtain category score maps.

[0028] Step 3. Analyze the fMRI visual function data of a subject viewing a brand-new test image, and use the category score maps to...
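Once the linear mapping model is trained, the extraction in Step 3 amounts to scoring each candidate category from the new fMRI data and selecting the highest-scoring one. A minimal sketch, with random stand-in weights and hypothetical category labels:

```python
import numpy as np

rng = np.random.default_rng(2)
categories = ["face", "house", "animal", "vehicle"]  # hypothetical labels

W_lin = rng.standard_normal((4, 32))  # trained linear mapping from Step 1 (stand-in)
voxels = rng.standard_normal(32)      # fMRI data for a brand-new test image (stand-in)

scores = W_lin @ voxels               # per-category scores
attended = categories[int(np.argmax(scores))]  # extracted focused-target category
print(attended)
```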

Embodiment 2

[0029] Embodiment 2. As shown in Figure 1, a CNN-based fMRI visual function data target extraction method includes the following steps:

[0030] Step 1. Collect fMRI visual function data from subjects stimulated by natural images of complex scenes; train a deep convolutional neural network model mapping stimulus images to fMRI visual function data, and a linear mapping model from fMRI visual function data to target categories. The deep convolutional neural network model includes convolutional layers, rectified linear unit (ReLU) layers, max pooling layers, and fully connected layers.

[0031] Step 2. Add feedback layers to the deep convolutional neural network model to obtain a convolutional neural feedback model; compose the convolutional neural feedback model with the linear mapping model obtained in Step 1 to obtain category score maps. This specifically includes the following steps:

[0032] Step 2.1. Stack a feedback layer after each rectified linear unit layer in the deep c...
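One way to read the feedback layer of Step 2.1 is as a binary gate that records which ReLU units fired on the forward pass, so that category evidence can later be propagated back only through the active units. A minimal NumPy sketch under that assumption (a one-hidden-layer toy with random weights; not the patented model):

```python
import numpy as np

def relu_with_gate(x):
    """Forward ReLU plus a binary gate recording which units were active."""
    gate = (x > 0).astype(float)
    return x * gate, gate

def feedback(signal, gate):
    """Feedback layer: pass signal only through units the ReLU activated."""
    return signal * gate

rng = np.random.default_rng(1)
x = rng.standard_normal(8)        # input (e.g. flattened features)
W1 = rng.standard_normal((8, 8))  # hidden-layer weights
w_cat = rng.standard_normal(8)    # linear mapping weights for one category

h, gate = relu_with_gate(W1 @ x)  # forward pass, gate stacked after the ReLU
score = w_cat @ h                 # category score

# Feedback pass: project the category weights back through the gated units,
# yielding a per-input relevance (score) map for that category.
score_map = W1.T @ feedback(w_cat, gate)
print(score_map.shape)
```

Composing one such gated backward pass per ReLU layer with the linear mapping model is what, on this reading, turns the network into the convolutional neural feedback model of Step 2.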



Abstract

The present invention relates to a CNN (convolutional neural network)-based fMRI (functional magnetic resonance imaging) visual function data object extraction method. The method includes the following steps: fMRI visual function data of a subject under the stimulation of complex-scene natural images are acquired; a deep convolutional neural network model from stimulus images to fMRI visual function data is trained, and at the same time a linear mapping model from fMRI visual function data to focused-target categories is trained; feedback layers are added to the deep convolutional neural network model, the trained linear mapping model is composed with it, and category score maps are obtained for the different target categories in a test image; the category score maps are then used to analyze the fMRI visual function data of the subject viewing a new test image, and the target the subject focused on is extracted. With this method, the fMRI visual function data evoked when a subject views a complex-scene natural image can be analyzed, the target in the image that the subject focused on can be extracted, and the accuracy of focused-target extraction is improved.

Description

Technical field

[0001] The invention relates to the technical field of human-computer interaction fMRI visual function data processing, and in particular to a CNN-based method for extracting objects from fMRI visual function data.

Background technique

[0002] The brain is the perception, thinking, and control center of the human body. Extremely large and complex flows of information pass in and out of it at all times, and it ensures the normal operation of the body through efficient information transmission and processing. Understanding this information processing, which modern science and technology cannot yet match, has been a long-standing goal of the emerging field of neuroinformatics. Vision is the most important channel through which humans obtain external information, and methods for interpreting the brain's visual information are a focus of neuroscience research. In recent years, neuroimaging technology has made great progress, with the emer...

Claims


Application Information

IPC(8): G06T7/00
CPC: G06T7/0012; G06T2207/10088; G06T2207/20081
Inventors: 王林元, 乔凯, 张驰, 胡逸聪, 徐一夫, 陈健, 曾磊, 王彪, 童莉
Owner THE PLA INFORMATION ENG UNIV