
Retinal neuron coding method based on convolutional neural network

A convolutional neural network coding method applied to the encoding of single retinal ganglion cells, addressing problems such as the poor natural-scene coding ability of existing encoding models and the unclear nature of what CNNs learn from the neural networks of the visual system.

Pending Publication Date: 2021-02-05
ZHEJIANG LAB

AI Technical Summary

Problems solved by technology

To this end, visual encoding models based on convolutional neural networks (CNN) have been developed, but current encoding models perform poorly in encoding natural scenes, and it is unclear what CNNs learn from the neural networks of the visual system.



Examples


Embodiment 1

[0060] In the LNLN model, the computational components of every layer, such as the temporal filters, spatial receptive fields, nonlinear functions, and connection weights, are known. When a white noise stimulus is presented, the corresponding simulated spike response is obtained. A CNN model trained on such simulated data is found not only to predict the response well, but also to have first-layer convolution kernels that correspond to the spatiotemporal receptive fields of the first-layer subunits in the simulation model.
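The patent does not give the simulation parameters, so the following is only a minimal sketch of how such an LNLN cascade could generate (white-noise stimulus, spike) pairs for CNN training; the subunit count, filter sizes, and nonlinearities are illustrative assumptions.

```python
# Minimal sketch of an LNLN simulation producing (white-noise stimulus, spike) pairs.
# Subunit count, filter sizes, and nonlinearities are assumptions, not patent values.
import numpy as np

rng = np.random.default_rng(0)
n_subunits, space, taps, T = 4, 8, 15, 50_000            # assumed dimensions

spatial_rf = rng.normal(size=(n_subunits, space))         # known subunit spatial receptive fields
temporal_f = rng.normal(size=(n_subunits, taps))          # known subunit temporal filters
weights = np.abs(rng.normal(size=n_subunits))             # known subunit-to-ganglion-cell weights

stimulus = rng.normal(size=(T, space))                    # white-noise stimulus

def simulate_lnln(stim):
    """Simulated ganglion-cell spike counts from the known LNLN cascade."""
    spatial_drive = stim @ spatial_rf.T                                   # (T, n_subunits)
    drive = np.stack(
        [np.convolve(spatial_drive[:, i], temporal_f[i])[: len(stim)]    # temporal filtering
         for i in range(n_subunits)], axis=1)
    pooled = np.maximum(drive, 0) @ weights                               # subunit nonlinearity + pooling
    rate = np.log1p(np.exp((pooled - pooled.mean()) / pooled.std()))      # output nonlinearity
    return rng.poisson(rate)                                              # spike generation

spikes = simulate_lnln(stimulus)   # (stimulus, spikes) form the simulated training data
```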

[0061] A CNN model trained with real physiological data can likewise recover the receptive fields of bipolar cells in the real data. Specifically, from the input stimulus images and the output response spikes, the CNN model can identify the spatiotemporal receptive fields of the subunits of the ganglion cell model.
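As a rough illustration of this check, one could compare each learned first-layer kernel with each known subunit receptive field; the sketch below assumes the kernels and ground-truth receptive fields share the same spatiotemporal shape, and the names `conv1_weight` and `subunit_rfs` are hypothetical.

```python
# Sketch: best-match correlation between learned first-layer kernels and the known
# subunit spatiotemporal receptive fields. Names and shapes are assumptions.
import numpy as np

def kernel_rf_similarity(conv1_weight, subunit_rfs):
    """For each learned kernel, absolute Pearson correlation with its closest subunit RF."""
    kernels = conv1_weight.reshape(conv1_weight.shape[0], -1)
    rfs = subunit_rfs.reshape(subunit_rfs.shape[0], -1)
    sims = np.array([[abs(np.corrcoef(k, r)[0, 1]) for r in rfs] for k in kernels])
    return sims.max(axis=1)

# e.g. kernel_rf_similarity(model.conv1.weight.detach().cpu().numpy(), subunit_rfs)
```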

Embodiment 2

[0063] With a CNN model trained for specified ganglion cells on a specific type of stimulus (white noise or natural images, the training set), the model can predict the neural response very well when a new visual stimulus drawn from the same distribution (the test set, of the same type as the training data) is input. For example, a CNN model trained on the white-noise data of ganglion cell A can predict the response of ganglion cell A to new white-noise stimuli; similarly, a CNN model trained on natural-image data of ganglion cell B can predict the response of ganglion cell B to novel natural-image stimuli.
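A simple way to quantify "predict the neural response very well" is the correlation between predicted and recorded responses on the held-out test set; the sketch below assumes a PyTorch model and uses illustrative variable names.

```python
# Sketch: within-distribution evaluation of a trained CNN (Embodiment 2).
# `model`, `test_stimuli`, and `test_responses` are illustrative names.
import numpy as np
import torch

def prediction_accuracy(model, test_stimuli, test_responses):
    """Pearson correlation between predicted and recorded test-set responses."""
    model.eval()
    with torch.no_grad():
        pred = model(torch.as_tensor(test_stimuli, dtype=torch.float32)).squeeze().numpy()
    return np.corrcoef(pred, np.asarray(test_responses, dtype=float))[0, 1]

# e.g. r = prediction_accuracy(model_A, white_noise_test, responses_A)
```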

Embodiment 3

[0065] Given data from several ganglion cells and a trained CNN model, the transfer-learning or generalization ability of the CNN model can be studied, i.e., whether a CNN model trained on the data of one ganglion cell can predict the response of another cell. For example, a CNN model trained on the responses of neuron C to white-noise stimulation can not only accurately predict the responses of neuron C to new white-noise stimuli, but also predict the responses of neuron D to new white-noise stimuli.
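Reusing the `prediction_accuracy` helper sketched under Embodiment 2, this generalization test could be expressed as below; `model_C`, `white_noise_test`, `responses_C`, and `responses_D` are hypothetical names.

```python
# Sketch of the Embodiment 3 generalization test: a CNN trained on neuron C's
# white-noise responses is scored on C's held-out data and on neuron D's data
# for the same new white-noise stimuli. All names are illustrative.
within_cell  = prediction_accuracy(model_C, white_noise_test, responses_C)   # C -> C
across_cells = prediction_accuracy(model_C, white_noise_test, responses_D)   # C -> D
print(f"C->C correlation: {within_cell:.3f}   C->D correlation: {across_cells:.3f}")
```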



Abstract

The invention discloses a retinal neuron coding method based on a convolutional neural network. The method comprises the following steps: S1, obtaining ganglion cell stimulation data and corresponding response data; S2, constructing and training a CNN model, including the following steps: S21, inputting the ganglion cell stimulation data into a first convolution layer; S22, inputting the output of the first convolution layer into a second convolution layer; S23, inputting the output of the second convolution layer into a fully connected layer; S24, comparing the output with the response data and optimizing the output of the CNN model; and S3, predicting the response data of ganglion cells with the trained CNN model. The CNN model is chosen for its strong nonlinear computing power; experiments show the effect is best when the CNN model has two convolution layers, the fully connected layer contains only one output neuron, and the information is rectified through an activation function.
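A minimal sketch of this two-convolution-layer architecture (steps S21-S24) in PyTorch is given below; the kernel sizes, channel counts, input frame size, and the choice of Softplus as the output activation are assumptions, since the abstract does not specify them.

```python
# Sketch of the abstract's CNN (S21-S24): two convolution layers, a fully connected
# layer with a single output neuron, and an activation function rectifying the output.
# Channel counts, kernel sizes, and frame size are illustrative assumptions.
import torch
import torch.nn as nn

class RetinalCNN(nn.Module):
    def __init__(self, in_channels=1, frame_size=40):
        super().__init__()
        self.conv1 = nn.Conv2d(in_channels, 8, kernel_size=15)   # S21: first convolution layer
        self.conv2 = nn.Conv2d(8, 4, kernel_size=9)               # S22: second convolution layer
        flat = 4 * (frame_size - 15 + 1 - 9 + 1) ** 2
        self.fc = nn.Linear(flat, 1)                               # S23: single output neuron
        self.act = nn.Softplus()                                   # rectifying activation

    def forward(self, x):                                          # x: (batch, channels, H, W)
        x = torch.relu(self.conv1(x))
        x = torch.relu(self.conv2(x))
        return self.act(self.fc(x.flatten(1)))                     # predicted response

# S24: compare the output with the recorded response and optimize, e.g.
# loss = torch.nn.functional.mse_loss(model(stim_batch), resp_batch); loss.backward()
```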

Description

Technical field [0001] The invention relates to the technical field of visual coding with neural networks, and in particular to a coding technology for single retinal ganglion cells based on a convolutional neural network. Background technique [0002] Visual function is an extremely important function of the human brain: 70% of the information we receive every day comes from vision. Understanding the working mechanism of the visual system is therefore of great significance to neuroscience and machine vision. Since the retina is the input stage of the visual system, understanding how the retina works is the basis for understanding how the visual system works. Rich and dynamic visual stimuli are encoded within the nervous system through the computational components of the retinal multilayer neural network and induce spiking electrical signals in ganglion cells (GC, ganglion cell). To this end, visual encoding models based on convolutional neural networks (CNN) have been developed, but current encoding models perform poorly in encoding natural scenes, and it is unclear what CNNs learn from the neural networks of the visual system.


Application Information

IPC(8): G06K9/46; G06K9/62; G06N3/04; G06N3/08
CPC: G06N3/08; G06V10/449; G06N3/048; G06N3/045; G06F18/241
Inventor: 余肇飞, 贾杉杉, 郑雅菁, 刘健
Owner: ZHEJIANG LAB