
Robot visual processing method based on attention mechanism

A technology relating to robot vision and attention mechanisms, applied to instruments, computer components, and character and pattern recognition; it addresses problems such as the immaturity of existing visual systems.

Inactive Publication Date: 2015-03-25
SOUTH CHINA UNIV OF TECH

AI Technical Summary

Problems solved by technology

[0014] Visual systems based on the attention mechanism proposed so far are still very immature; they are mainly used in image processing, pattern recognition, video surveillance and similar applications, and the results achieved remain far from the desired goals.




Embodiment Construction

[0101] The present invention will be further described in detail below in conjunction with the embodiments and the accompanying drawings, but the embodiments of the present invention are not limited thereto.

[0102] As shown in Figure 1, the attention mechanism-based robot visual processing method consists of the following sequential steps (a code sketch of these steps appears after the step list):

[0103] S1. Image preprocessing: perform basic processing on the image, including color space conversion, edge extraction, image transformation and image thresholding; the image transformation includes basic scaling, rotation, histogram equalization, and affine transformation of the image;

[0104] S2. Feature extraction: extract five types of feature information (skin color, color, texture, motion and spatial coordinates) from the preprocessed image;

[0105] S3. Arbitration decision: the information obtained by the feature extraction layer is selectively distributed, according to a certain arbitration decision strategy, to t...
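
The S1-S3 layers above can be prototyped with a standard vision library. Below is a minimal sketch in Python assuming OpenCV and NumPy; the function names, the skin-color thresholds and the dictionary-based arbiter are illustrative assumptions, not the patent's actual implementation.

    import cv2
    import numpy as np

    def preprocess(frame):
        """S1 image preprocessing: color space conversion, edge extraction,
        image transformation (equalization shown here) and thresholding."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)        # color space conversion
        ycrcb = cv2.cvtColor(frame, cv2.COLOR_BGR2YCrCb)
        gray = cv2.equalizeHist(gray)                        # histogram equalization
        edges = cv2.Canny(gray, 50, 150)                     # edge extraction
        _, binary = cv2.threshold(gray, 0, 255,
                                  cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # thresholding
        return {"gray": gray, "hsv": hsv, "ycrcb": ycrcb,
                "edges": edges, "binary": binary}

    def extract_features(pre, prev_gray=None):
        """S2 feature extraction: skin color, color, texture, motion and
        spatial coordinates from the preprocessed image."""
        skin = cv2.inRange(pre["ycrcb"], (0, 133, 77), (255, 173, 127))  # coarse skin-color mask
        color = cv2.calcHist([pre["hsv"]], [0], None, [16], [0, 180])    # hue histogram as color cue
        texture = float(pre["edges"].mean())                             # edge density as a crude texture measure
        motion = (cv2.absdiff(pre["gray"], prev_gray)                    # frame differencing for motion
                  if prev_gray is not None else np.zeros_like(pre["gray"]))
        m = cv2.moments(skin)                                            # centroid of the skin mask
        coords = (m["m10"] / m["m00"], m["m01"] / m["m00"]) if m["m00"] > 0 else None
        return {"skin": skin, "color": color, "texture": texture,
                "motion": motion, "coords": coords}

    def arbitrate(features, subscribers):
        """S3 arbitration decision: distribute each feature only to the
        upper-layer function applications that registered for it."""
        for name, value in features.items():
            for handler in subscribers.get(name, ()):
                handler(value)

In this sketch, a face-detection or gesture module from the function application layer would register a callback under, for example, subscribers["skin"], and the arbiter would forward the skin-color mask to it on every frame.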



Abstract

The invention discloses a robot visual processing method based on an attention mechanism. The method includes the following steps. Image preprocessing: basic processing is performed on the image, including color space conversion, edge extraction, image transformation and image thresholding. Feature extraction: five kinds of feature information (skin color, color, texture, motion and spatial coordinates) are extracted from the preprocessed image. Arbitration decision: the extracted information is selectively distributed, according to a certain arbitration decision strategy, to the upper-layer function application subsystems that need it. Function application: the corresponding operations are performed on the feature information delivered by the arbitration decision to realize the function applications; in other words, the layer that directly implements the robot's visual applications comprises five parts: human face detection, color recognition, motion detection and tracking, gesture interaction, and the attention mechanism. The method can provide the robot with more complete visual information such as the human face, skin color and gestures, and gives it motion detection, tracking and planning capabilities.
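
To make the layered structure described above concrete, the following is a hypothetical Python sketch of how the arbitration decision could feed the function application layer; the module names and the feature-requirements table are assumptions chosen to mirror the five parts listed in the abstract, not an implementation disclosed by the patent.

    # Hypothetical wiring of the arbitration-decision and function-application layers.
    FEATURE_REQUIREMENTS = {
        "face_detection":      {"skin", "coords"},
        "color_recognition":   {"color"},
        "motion_tracking":     {"motion", "coords"},
        "gesture_interaction": {"skin", "motion"},
        "attention_mechanism": {"skin", "color", "texture", "motion", "coords"},
    }

    def dispatch(features: dict) -> dict:
        """Hand each functional module only the features it needs, and only
        when every required feature has been extracted for the current frame."""
        selected = {}
        for module, needed in FEATURE_REQUIREMENTS.items():
            if needed <= features.keys():
                selected[module] = {name: features[name] for name in needed}
        return selected

Calling dispatch on the per-frame feature dictionary then yields exactly the feature bundles that each of the five functional modules consumes.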

Description

Technical Field

[0001] The invention relates to a robot vision system, and in particular to a robot vision processing method based on an attention mechanism.

Background Technique

[0002] 1. Research on foreign robot vision systems

[0003] In 1993, a robot head called Kismet was developed under the leadership of Cynthia Breazeal, a roboticist at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Kismet has vision and hearing functions and is a baby-like robot inspired by the way infants and caregivers communicate. Each eye of the Kismet head is equipped with a 5.5 mm CCD color camera, and processing is carried out by a parallel network of eight 50 MHz TMS320C40 DSPs for image processing and two Motorola 68332-based microcontrollers that drive the motors. Kismet has abilities and behaviors similar to those of a baby, such as imitating the feedback between children and parents to express emotions, and the way babies learn by themselves to communicate with others. [0004]...

Claims


Application Information

IPC(8): G06K9/62
CPC: G06F18/29
Inventor: 肖南峰
Owner: SOUTH CHINA UNIV OF TECH