
Bionic visual image target recognition method fusing dot-line memory information

A visual image and target recognition technology, applied to the recognition of zoomed and occluded targets. It addresses the low recognition rate of occluded targets, handles the occlusion problem, and provides size and position invariance.

Active Publication Date: 2019-12-20
CENT SOUTH UNIV


Problems solved by technology

[0005] The technical problem to be solved by the present invention is to provide a bionic visual image target recognition method that integrates point and line memory information, overcoming the low recognition rate that traditional methods achieve on occluded targets. The method combines grid cells, recognition memory and vector navigation to improve the recognition rate of occluded target images.



Examples


Embodiment Construction

[0029] The overall framework of the identification process of the present invention is shown in Figure 1. It specifically includes the following steps:

[0030] Step 1: Construct a vision-driven grid cell set, and perform memory recognition by encoding the movement vector between features. Each layer of the grid cell map consists of a matrix of the same size (440×440 pixels); 9 modules and 100 offsets are used, forming a four-dimensional 440×440×100×9 grid cell array. Figure 2 shows the grid cell set of the 9 modules for one offset, the grid cell set of 10 offsets for the first module, and the grid cell set of 10 offsets for the fifth module;
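A minimal sketch of such a multi-module grid cell set. The cosine-grating firing model, the module spacings, and the offsets below are illustrative assumptions (not the patent's parameters), and a small 64×64 map with 9 offsets stands in for the full 440×440×100×9 array:

```python
import numpy as np

def grid_cell_map(size, spacing, offset, orientation=0.0):
    """Firing map of one grid cell: sum of three cosine gratings 60 degrees apart
    (a standard idealization of hexagonal grid-cell firing)."""
    ys, xs = np.mgrid[0:size, 0:size].astype(float)
    pos = np.stack([xs - offset[0], ys - offset[1]], axis=-1)
    rate = np.zeros((size, size))
    for j in range(3):
        theta = orientation + j * np.pi / 3.0
        k = (4 * np.pi / (np.sqrt(3) * spacing)) * np.array([np.cos(theta), np.sin(theta)])
        rate += np.cos(pos @ k)
    return (rate + 1.5) / 4.5  # the three-cosine sum lies in [-1.5, 3]; map to [0, 1]

SIZE = 64                                        # small stand-in for 440x440
modules = [12.0 * 1.42 ** m for m in range(9)]   # 9 modules, geometric spacing (assumption)
offsets = [(dx, dy) for dx in (0, 4, 8) for dy in (0, 4, 8)]  # 9 phase offsets for illustration

grid_set = np.stack([
    np.stack([grid_cell_map(SIZE, s, o) for o in offsets])
    for s in modules
])  # shape: (modules, offsets, SIZE, SIZE)
print(grid_set.shape)  # → (9, 9, 64, 64)
```

Each slice `grid_set[m, o]` is one layer of the grid cell map; the firing rate peaks (value 1.0) on the hexagonal lattice anchored at that layer's offset.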

[0031] Step 2: Construct the distance cell model and calculate the displacement vector between the positions encoded by the grid cell population vector;
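One way the displacement-vector step could work is coarse-to-fine decoding from per-module phase differences. The sketch below is an assumption about the mechanism: it simplifies the hexagonal lattice to a square one and requires the true displacement to lie within half the coarsest module spacing:

```python
import numpy as np

def phases(pos, spacing):
    """Phase of a position within one grid module (square-lattice simplification)."""
    return np.mod(pos, spacing)

def decode_displacement(p_from, p_to, spacings):
    """Recover the displacement vector between two encoded positions from
    per-module phase differences, resolving each module's lattice ambiguity
    against the running coarse estimate (coarse-to-fine)."""
    est = np.zeros(2)
    for s in sorted(spacings, reverse=True):  # coarsest module first
        d_phase = np.mod(phases(p_to, s) - phases(p_from, s), s)
        # choose the lattice copy of d_phase closest to the current estimate
        k = np.round((est - d_phase) / s)
        est = d_phase + k * s
    return est

spacings = [12, 17, 24, 34, 48, 68, 96, 136, 192]  # 9 modules (assumed values)
d = decode_displacement(np.array([50., 80.]), np.array([130., 35.]), spacings)
print(d)  # → [ 80. -45.]
```

The finer modules contribute nothing extra in this exact-arithmetic example, but with noisy phases they would progressively sharpen the coarse estimate.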

[0032] Step 3: After the grid cell and distance cell models are constructed, image training is performed...



Abstract

The invention discloses a bionic visual image target recognition method fusing dot-line memory information. The method comprises the steps of: constructing a vision-driven grid cell set; constructing a distance cell model and calculating the displacement vector between the positions coded by the grid cell population vector; calculating the response of all sensory neurons to each foveal pixel k through a Gaussian kernel, the response being used for target recognition; computing the fovea centralis of the current target image with the Gaussian-kernel sensory cells, taking the feature tag unit with the strongest response as the next jump point, and accumulating the corresponding stimulated identity cells; selecting the next-hop viewpoint and updating the foveal displacement vector through the distance cell model; and cyclically repeating the computation of the current position, the selection of the next-hop viewpoint, and the vector calculation during target identification until the accumulation of a certain stimulated identity cell reaches the threshold value 0.9, at which point that stimulated identity is taken as the finally identified target. The method has a relatively high recognition rate for position-changed, zoomed and occluded images.
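The recognition loop the abstract describes (Gaussian-kernel sensory responses, strongest-response jump points, identity-cell accumulation up to the 0.9 threshold) might be sketched as follows. The stored feature memories, the per-feature evidence normalization, and the fallback rule are illustrative assumptions, not the patent's exact formulation:

```python
import numpy as np

# Hypothetical stored targets: each is a small set of 2-D feature positions.
memory = {
    "target_A": np.array([[10., 10.], [30., 12.], [22., 40.], [50., 35.], [15., 60.]]),
    "target_B": np.array([[70., 70.], [90., 72.], [82., 95.], [60., 88.], [75., 55.]]),
}

def gaussian_response(fovea, feature, sigma=5.0):
    """Gaussian-kernel response of a sensory neuron to a feature near the fovea."""
    return float(np.exp(-np.sum((fovea - feature) ** 2) / (2 * sigma ** 2)))

def recognize(scene, threshold=0.9, sigma=5.0):
    """Saccade over scene features, accumulating evidence ('identity cells') per
    stored target until one target's normalized accumulation reaches the
    threshold; fall back to the best-supported target if features run out."""
    identity = {name: 0.0 for name in memory}
    remaining = list(range(len(scene)))
    while remaining:
        # Next jump point: the (scene feature, stored feature) pair with the
        # strongest Gaussian-kernel response.
        i, name, r = max(
            ((i, name, gaussian_response(scene[i], f, sigma))
             for i in remaining
             for name, feats in memory.items()
             for f in feats),
            key=lambda t: t[2],
        )
        identity[name] += r / len(memory[name])  # accumulate the identity cell
        remaining.remove(i)                      # this feature has been fixated
        if identity[name] >= threshold:
            return name, identity
    return max(identity, key=identity.get), identity

# Occluded view: only 4 of target_A's 5 features, each slightly shifted.
scene = memory["target_A"][:4] + 0.5
label, evidence = recognize(scene)
print(label)  # → target_A
```

With the occluded scene the 0.9 threshold is never crossed, so the fallback picks the target with the most accumulated evidence; this is where the method's occlusion tolerance would show up.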

Description

Technical field

[0001] The invention relates to the fields of visual perception, recognition memory and biological information, in particular to a grid-cell-based method for recognizing zoomed and occluded targets.

Background technique

[0002] Recognition of zoomed and occluded targets is a hot issue in the current vision field. When the target image is zoomed or blocked by obstacles, traditional machine learning methods struggle to recognize it. Traditional object recognition models focus on massively parallel processing of low-level features, with high-level representations emerging only in a post-processing stage. Visual perception, however, is closely tied to eye movement: human perception scans the current target image quickly and continuously, and no matter how the target image is zoomed or occluded, the human visual system can effectively map the zoomed and occluded image onto the target. The target image i...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06K9/00; G06K9/62
CPC: G06V20/40; G06F18/214
Inventor: 余伶俐金鸣岳周开军
Owner: CENT SOUTH UNIV