Multi-mode comprehensive information recognition mobile double-arm robot device, system and method

A comprehensive-information and robot technology, applied in the field of multi-mode comprehensive information recognition mobile dual-arm robot devices, systems, and methods, which can solve problems such as inaccurate recognition of equipment locations, the absence of voice interaction, and the high labor intensity of placement work.

Pending Publication Date: 2020-11-10
谈斯聪
Cites: 5 · Cited by: 16

AI Technical Summary

Problems solved by technology

The inventors of the present application found that the existing technology has the following problems: autonomous mobile positioning and navigation are not accurate; it is difficult to identify the locations of objects and equipment; there is no voice interaction; and placement work is labor-intensive, yet there is no function for autonomous placement of equipment and items.


Image

Figures 1–3: drawings of the multi-mode comprehensive information recognition mobile double-arm robot device, system and method.

Examples


Embodiment 1

[0080] As shown in Figure 1, an embodiment of an artificial intelligence robot for campus use comprises:

[0081] The robot's main control system 10. This module connects the main control system 10 with data acquisition devices such as the camera 40 and the information collection and reading device 70. The main control system 10 carries the robot arms 20, 30 and performs their motion planning, picking up and moving items in school scenes (classrooms, laboratories, libraries), commercial scenes (supermarkets, shopping malls), medical scenes (outpatient clinics, wards), factories, and warehouses. The main control system 10 communicates with the voice device, enabling voice interaction between the robot and the user.
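The abstract describes the system as built on "the robot node communication principle," with the main control system exchanging data with the camera, reader, arm, and voice modules. A minimal sketch of that pattern, assuming a simple publish/subscribe bus (the topic names and module roles below are illustrative, not from the patent):

```python
# Hypothetical sketch of the node-communication pattern: the main control
# system subscribes to topics published by the camera and reader modules.
# Topic names and message payloads are assumptions for illustration.

from collections import defaultdict

class MessageBus:
    """Minimal publish/subscribe bus connecting robot modules."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        # Register a callback to receive every message on this topic.
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        # Deliver the message to all callbacks registered for the topic.
        for callback in self._subscribers[topic]:
            callback(message)

bus = MessageBus()
log = []

# The main control system listens to the camera and reader modules.
bus.subscribe("camera/image", lambda msg: log.append(("main_control", msg)))
bus.subscribe("reader/scan", lambda msg: log.append(("main_control", msg)))

bus.publish("camera/image", "frame_001")
bus.publish("reader/scan", "barcode_XYZ")
print(log)  # [('main_control', 'frame_001'), ('main_control', 'barcode_XYZ')]
```

In a real implementation this role would typically be filled by a robotics middleware (for example, ROS topics); the in-process bus above only shows the decoupling between the main control system and the data-acquisition modules.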

[0082] The robot's main control system module 100 is connected to the voice device 90 and communicates with the voice module 900; the robot interacts with the user by voice, collects voice information, and issues voice commands...

Embodiment 2

[0088] On the basis of Embodiment 1, the robot's main control system module 100, the visual recognition module 600, and the radar 50 build a map; the positioning and navigation method is as shown in Figure 2:

[0089] Set the campus scene planning parameters and the environment module. Input comprehensive features such as the corresponding colors, numbers, letters, text, and special logos. Extract the image features corresponding to the logo contours, and transform the features into input data. Establish the image features and input the feature values of the detected items. Improve the weight optimizer, train on the images quickly, and obtain the output values. According to the special identification results, accurately identify the target and locate the target position.

[0090] The robot moves to the target position. The navigation goal is specified under the main system 10, setting the parameters frame_id, goal_id, PoseStamped, PositionPose, and the target composition of the Q...
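The parameters named above (frame_id, goal_id, PoseStamped) match the ROS navigation goal convention. A minimal sketch of assembling such a goal, assuming plain dictionaries in place of real ROS message types (in an actual stack this would be a geometry_msgs/PoseStamped sent to a navigation action server; field values here are illustrative):

```python
# Hedged sketch of a navigation goal with the fields named in the text:
# goal_id, a stamped header with frame_id, and a position/orientation pose.
# The coordinate values and goal name are assumptions for illustration.

import time

def make_nav_goal(goal_id, x, y, quat_z=0.0, quat_w=1.0, frame_id="map"):
    """Build a navigation goal with a stamped pose in the given frame."""
    return {
        "goal_id": goal_id,
        "header": {"frame_id": frame_id, "stamp": time.time()},
        "pose": {
            "position": {"x": x, "y": y, "z": 0.0},
            # Planar orientation as a quaternion (rotation about z only).
            "orientation": {"x": 0.0, "y": 0.0, "z": quat_z, "w": quat_w},
        },
    }

goal = make_nav_goal("classroom_A", x=3.5, y=-1.2)
print(goal["header"]["frame_id"], goal["pose"]["position"]["x"])  # map 3.5
```

Publishing the goal in the map frame (frame_id="map") lets the navigation stack plan against the map built by the radar 50 in this embodiment.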

Embodiment 3

[0093] On the basis of Embodiment 1, the robot's main control system module interacts with the visual recognition module and the robot arms 20, 30. The target setting, target recognition, target positioning, and action planning methods are as shown in Figure 3:

[0094] In the picking and placing area 1000, the visual recognition module is used to create and identify the target (setting the target object's size, pose, and color), and a mathematical model is created according to the characteristics of the equipment, objects, and scenes. Color, contour, digital-code, two-dimensional-code, text, and image features corresponding to special logo images are extracted, and grabbing targets are classified and identified.
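The target-creation step above registers each target by size, pose, and color, then classifies detections against those models. A minimal sketch, assuming a simple attribute-matching rule (the model names, dimensions, and tolerance are hypothetical, not taken from the patent):

```python
# Hedged illustration of "create and identify the target": a grabbing target
# is registered by size and color, and candidate detections from the vision
# module are matched against the registered models within a tolerance.

from dataclasses import dataclass

@dataclass
class TargetModel:
    name: str
    width_mm: float
    height_mm: float
    color: str  # dominant color label reported by the vision module

def match_target(detection, models, size_tol_mm=10.0):
    """Return the name of the first registered model matching the detection."""
    for m in models:
        if (detection["color"] == m.color
                and abs(detection["width_mm"] - m.width_mm) <= size_tol_mm
                and abs(detection["height_mm"] - m.height_mm) <= size_tol_mm):
            return m.name
    return None  # no registered grabbing target matches

# Hypothetical registered targets for the picking and placing area.
models = [
    TargetModel("textbook", width_mm=180, height_mm=250, color="blue"),
    TargetModel("medicine_box", width_mm=60, height_mm=90, color="white"),
]

detection = {"color": "white", "width_mm": 63, "height_mm": 88}
print(match_target(detection, models))  # medicine_box
```

A matched name would then be handed to the arm motion-planning step with the detected pose; the contour, QR-code, and text features the patent lists would in practice feed a richer classifier than this color-and-size rule.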

[0095] The feature values of colors, numbers, letters, texts, special identifiers, etc. are converted into input data. A mathematical model of the image features is established, and the feature value of th...



Abstract

The invention provides a multi-mode comprehensive information recognition mobile double-arm robot device, system and method. A robot action-planning equipment platform is realized by utilizing artificial intelligence robot multi-scene recognition technology, multi-mode recognition technology, and voice recognition and positioning-navigation mobile technology. By applying artificial intelligence and robot technology and combining the robot node communication principle, the device, system and method realize voice acquisition, voice interaction, voice instruction, voice query, remote and autonomous control, autonomous placement, code-scanning query, scanning and reading of biological information, multi-scene article and personnel identification, article and equipment management, radar double-precision position locating, and autonomous mobile navigation. A double-arm sorting, counting and article-placing integrated robot device is provided, and the robot system is connected with a personnel management system and an article management system. The invention improves the capabilities of voice interaction, accurate positioning, autonomous positioning and navigation, and autonomous sorting, counting and placing of articles, and the method, system and device are widely applicable to school, business, factory, warehouse and medical scenes.

Description

technical field

[0001] The present invention relates to the field of artificial intelligence robots, and in particular to camera-based multi-scene visual recognition technology for campuses, shopping malls, factories, warehouses, hospitals, and similar settings: multi-mode visual recognition; recognition of human faces, biological information, commercial items, factory and warehouse items, and medical items and equipment; voice recognition and voice interaction; radar positioning, navigation, and movement; and artificial intelligence robot technology for robot placement actions, widely used in campuses, shopping malls, factories, warehouses, hospitals, and other fields.

Background technique

[0002] With the promotion of artificial intelligence robots in the education, business-service, production, and medical fields, especially in campuses, shopping malls, factories, warehouses, and medical institutions, we are faced with problems such as a large amount of equipment and...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): B25J9/16
CPC: B25J9/1602; B25J9/1664; B25J9/1605; B25J9/1682; B25J9/1697
Inventor: not disclosed (不公告发明人)
Owner: 谈斯聪