The invention relates to a robot navigation method based on visual perception and a spatial cognitive neural mechanism. Collected visual images are transformed, through a neural network, into visual nodes that represent the position and orientation angle of the robot, forming visual cells; the visual code of the visual cells is transformed into a spatial description of the environment, and a cognitive map, similar to the one formed in the brain when a mammal moves freely, is constructed; and positioning and navigation of the robot are realized based on the cognitive map.
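A minimal sketch of how such visual-cell encoding and cognitive-map construction could be organized is given below; the class names, the cosine-similarity matching, and the threshold value are illustrative assumptions, not the patented implementation:

```python
import numpy as np

class VisualCell:
    """Stores a visual code together with the pose (x, y, heading) it was seen from."""
    def __init__(self, code, pose):
        self.code = code          # feature vector produced by the neural network
        self.pose = pose          # (x, y, theta) estimate at the time of creation

class CognitiveMap:
    """Collection of visual cells, loosely analogous to a mammal's cognitive map."""
    def __init__(self, match_threshold=0.85):
        self.cells = []
        self.match_threshold = match_threshold

    def match_or_create(self, code, pose_estimate):
        """Return the best-matching visual cell, or create a new one for a novel view."""
        best_cell, best_sim = None, -1.0
        for cell in self.cells:
            sim = float(np.dot(code, cell.code) /
                        (np.linalg.norm(code) * np.linalg.norm(cell.code) + 1e-9))
            if sim > best_sim:
                best_cell, best_sim = cell, sim
        if best_cell is not None and best_sim >= self.match_threshold:
            return best_cell            # familiar view: supports relocalization
        new_cell = VisualCell(code, pose_estimate)
        self.cells.append(new_cell)     # novel view: extend the cognitive map
        return new_cell
```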
According to a neural computation system of environment perception and spatial memory, the robot completes a series of tasks such as visual processing, spatial representation, self-positioning, and map updating, thereby realizing robot navigation with a high degree of bionics and strong autonomy in an unknown environment.
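One way the perception and spatial-memory loop described above could be tied together is sketched below, reusing the hypothetical CognitiveMap from the previous sketch; the encode_image function stands in for the unspecified neural network, and the odometry-based pose prediction is a simplifying assumption:

```python
def navigation_step(cog_map, encode_image, image, odometry, pose):
    """One cycle: visual processing, spatial representation, self-positioning, map update."""
    # Visual processing: the neural network turns the raw image into a visual code.
    code = encode_image(image)

    # Self-positioning: dead-reckon a predicted pose from odometry.
    dx, dy, dtheta = odometry
    predicted = (pose[0] + dx, pose[1] + dy, pose[2] + dtheta)

    # Spatial representation / map update: match the code against stored visual
    # cells; a familiar view corrects the pose, a novel view extends the map.
    n_before = len(cog_map.cells)
    cell = cog_map.match_or_create(code, predicted)
    if len(cog_map.cells) == n_before:   # matched an existing cell
        return cell.pose                 # relocalize to the remembered pose
    return predicted                     # new territory: trust the odometry estimate
```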
Compared with traditional simultaneous localization and mapping (SLAM) technology, the robot navigation method based on visual perception and the spatial cognitive neural mechanism of the invention avoids a series of complex calculations such as manually designed visual features and feature point matching, and greatly improves the robustness of the system to factors such as illumination changes, viewing angle changes, and object motion in the natural environment.