
A robot visual semantic navigation method, device and system

A robot vision and navigation technology, applied to a robot visual semantic navigation method, device, system and computer storage medium, which can solve problems such as the inability to navigate to objects outside the robot's field of view.

Active Publication Date: 2022-04-19
WUHAN UNIV OF TECH

AI Technical Summary

Problems solved by technology

[0004] The purpose of the present invention is to overcome the above-mentioned technical deficiencies by providing a robot visual semantic navigation method, device, system and computer storage medium, and to solve the technical problem in the prior art that the robot cannot navigate to objects that are not within its field of view.



Examples


Embodiment 1

[0025] As shown in Figure 1, Embodiment 1 of the present invention provides a robot visual semantic navigation method comprising the following steps:

[0026] S1. Collect the scene images captured by the robot and, at the same time, the voice commands received by the robot, and establish a scene image set and a voice command set;

[0027] S2. Annotate the image features of each scene image in the scene image set, and annotate the voice features of each voice command in the voice command set;

[0028] S3. Combine the image features and voice features from the same moment to construct a semantic map, obtaining a semantic map set, and annotate the semantic features of each semantic map in the semantic map set;

[0029] S4. Fuse the image features, voice features and semantic features from the same moment to construct a state vector, obtaining a state vector set;

[0030] S5. Annotate the action sequence corresponding to each state vector in the state vector set, and use the state vector set as training samples to train a deep reinforcement learning model, obtaining a navigation model;
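The steps above describe a pipeline rather than an implementation. The following is a minimal sketch of steps S4-S5 in PyTorch, assuming simple feature concatenation as the fusion operation, a small fully connected policy network, and a behavior-cloning-style loss on the annotated actions; the patent names a deep reinforcement learning model without giving its architecture, fusion operation or reward design, so all names and dimensions below are illustrative assumptions.

```python
# Minimal sketch of steps S4-S5, assuming PyTorch, feature concatenation as
# the fusion step, and a behavior-cloning-style loss on the annotated action
# labels. All names, dimensions and the training objective are illustrative
# assumptions, not details taken from the patent.
import torch
import torch.nn as nn


class NavigationPolicy(nn.Module):
    """Maps a fused state vector to scores over discrete navigation actions."""

    def __init__(self, state_dim: int, num_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 256),
            nn.ReLU(),
            nn.Linear(256, num_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)


def build_state_vectors(image_feats, voice_feats, semantic_feats):
    # S4: fuse the image, voice and semantic features taken at the same moment.
    # Concatenation is an assumption; any learned fusion could be substituted.
    return torch.cat([image_feats, voice_feats, semantic_feats], dim=-1)


# Hypothetical pre-extracted features for N moments (the outputs of S1-S3).
N, img_d, voice_d, sem_d, num_actions = 1024, 512, 128, 64, 6
image_feats = torch.randn(N, img_d)
voice_feats = torch.randn(N, voice_d)
semantic_feats = torch.randn(N, sem_d)
actions = torch.randint(0, num_actions, (N,))  # S5: annotated action labels

states = build_state_vectors(image_feats, voice_feats, semantic_feats)
policy = NavigationPolicy(states.shape[-1], num_actions)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# Stand-in training loop over the annotated (state vector, action) samples.
for epoch in range(10):
    logits = policy(states)
    loss = loss_fn(logits, actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```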

Embodiment 2

[0071] Embodiment 2 of the present invention provides a robot visual semantic navigation device, including a processor and a memory on which a computer program is stored. When the computer program is executed by the processor, the robot visual semantic navigation method provided by Embodiment 1 is realized.

[0072] The robot visual semantic navigation device provided by the embodiment of the present invention is used to implement the robot visual semantic navigation method. Therefore, the robot visual semantic navigation device also possesses the technical effects of the robot visual semantic navigation method, and will not be repeated here.
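As a rough illustration of Embodiment 2, the sketch below treats the stored computer program as a wrapper around the trained navigation model from the Embodiment 1 sketch above: the model parameters are persisted to the device's memory and the processor maps a fused state vector to a navigation action. The file name, the 704-dimensional state vector (512 + 128 + 64) and the action set are assumptions for illustration only.

```python
# Rough sketch of Embodiment 2, assuming the stored "computer program" wraps
# the trained navigation model of the Embodiment 1 sketch. File name, state
# dimension and action set are illustrative assumptions.
import torch
import torch.nn as nn

ACTIONS = ["move_forward", "move_backward", "turn_left", "turn_right", "look_up", "stop"]
STATE_DIM = 704  # 512 image + 128 voice + 64 semantic features (assumed)

# Parameters persisted to the device's memory (here: a file on disk).
policy = nn.Sequential(nn.Linear(STATE_DIM, 256), nn.ReLU(), nn.Linear(256, len(ACTIONS)))
torch.save(policy.state_dict(), "navigation_model.pt")


def navigate(state_vector: torch.Tensor) -> str:
    """Executed by the processor: map one fused state vector to an action."""
    policy.load_state_dict(torch.load("navigation_model.pt"))
    with torch.no_grad():
        action_index = policy(state_vector).argmax().item()
    return ACTIONS[action_index]


print(navigate(torch.randn(STATE_DIM)))
```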

Embodiment 3

[0074] As shown in Figure 2, Embodiment 3 of the present invention provides a robot visual semantic navigation system, including the robot visual semantic navigation device 1 provided in Embodiment 2, and further including a robot 2;

[0075] The robot 2 comprises a visual collection module, a voice collection module, a communication module and a movement control module;

[0076] The visual collection module is used to collect scene images;

[0077] The voice collection module is used to collect voice commands;

[0078] The communication module is used to send the scene images and voice commands to the robot visual semantic navigation device 1, and to receive the navigation control instructions sent by the robot visual semantic navigation device 1;

[0079] The movement control module is used to perform navigation control on the robot's joints according to the navigation control instructions.

[0080] In this embodiment, the robot visual semantic navigation device 1 can be i...
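The module structure of Embodiment 3 can be pictured with the following sketch. The class and method names are hypothetical: the patent names the four robot modules and the messages exchanged with the navigation device 1, but the transport, data formats and the device internals shown here are assumptions for illustration.

```python
# Sketch of the Embodiment 3 data flow. Class and method names are
# hypothetical; only the four modules and the exchanged messages come
# from the patent, the rest is assumed for illustration.
from dataclasses import dataclass


@dataclass
class NavigationControlInstruction:
    action: str  # e.g. "move_forward", "turn_left", "stop"


class NavigationDevice:
    """Stands in for the robot visual semantic navigation device 1."""

    def plan(self, scene_image, voice_command) -> NavigationControlInstruction:
        # In the patent this would run the trained navigation model of Embodiment 1.
        return NavigationControlInstruction(action="move_forward")


class Robot:
    """Stands in for robot 2 with its four modules."""

    def __init__(self, device: NavigationDevice):
        self.device = device  # reached through the communication module

    def collect_scene_image(self):  # visual collection module
        return b"<camera frame>"

    def collect_voice_command(self):  # voice collection module
        return "go to the kitchen"

    def apply(self, instruction: NavigationControlInstruction):
        # movement control module: drive the robot's joints per the instruction
        print(f"executing {instruction.action}")

    def step(self):
        scene_image = self.collect_scene_image()
        voice_command = self.collect_voice_command()
        # communication module: send observations, receive the control instruction
        instruction = self.device.plan(scene_image, voice_command)
        self.apply(instruction)


Robot(NavigationDevice()).step()
```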



Abstract

The invention relates to the technical field of robot navigation and discloses a robot visual semantic navigation method comprising the following steps: establish a scene image set and a voice command set; annotate the image features of each scene image in the scene image set and the voice features of each voice command in the voice command set; combine the image features and voice features from the same moment to construct a semantic map, obtaining a semantic map set, and annotate the semantic features of each semantic map in the semantic map set; fuse the image features, voice features and semantic features from the same moment to construct a state vector, obtaining a state vector set; annotate the action sequence corresponding to each state vector in the state vector set, and use the state vector set as training samples to train a deep reinforcement learning model, obtaining a navigation model; and perform navigation control on the robot according to the navigation model. The invention enables navigation to objects that are not in the robot's field of view.

Description

technical field

[0001] The invention relates to the technical field of robot navigation, and in particular to a robot visual semantic navigation method, device, system and computer storage medium.

Background technique

[0002] Semantic, goal-oriented navigation is a challenging task, and in everyday life visual navigation involves several problems. First, the robot may know nothing about the environment, in which case it needs to explore the environment to gain a better understanding of it. Second, the target object may not be visible when the robot starts to navigate, or it may move out of view during navigation, so robots need to learn effective search strategies to find target objects. Finally, even when the object is visible, planning a reasonable path to it is another problem the robot needs to deal with.

[0003] The previous navigation method was map-based navigation, SLAM (Simultaneous Localization and Mapping, real-tim...


Application Information

Patent Type & Authority: Patent (China)
IPC(8): B25J9/16
CPC: B25J9/16; B25J9/1697; B25J9/1664
Inventors: 宋华珠, 金宇
Owner: WUHAN UNIV OF TECH