
Image processing method and device

An image processing method and device in the field of robotics, addressing problems such as time-consuming data collection, the large amount of data required, and physical wear on the robot.

Active Publication Date: 2019-08-09
SHENZHEN SENSETIME TECH CO LTD
Cites 15 · Cited by 7

AI Technical Summary

Problems solved by technology

The disadvantage of this method is that the amount of data required is still large: roughly 2,500 expert demonstration examples had to be collected.
Collecting such data is very time-consuming and may cause physical wear on the robot; moreover, the scenes and actions demonstrated by experts are limited, so the robot cannot make appropriate predictions for unfamiliar scenes.

Method used



Examples


Embodiment 1

[0062] As shown in Figure 1, the image processing method may specifically include the following steps:

[0063] Step 101: Obtain a training sample set, where the training sample set includes at least one color sample image and at least one depth sample image corresponding to the color sample image;

[0064] Step 102: Input the training sample set into an image processing model for processing, where the image processing model includes a color image processing model and a depth image processing model; the color image processing model is used to process the color sample image to obtain a color reconstructed image, and the depth image processing model is used to process the depth sample image to obtain a depth reconstructed image;

[0065] Step 103: Determine a first loss parameter of the image processing model based on the color sample image, the color reconstructed image, the depth sample image, and the depth reconstructed image, and adjust the color image processing model based on the first loss parameter to obtain a trained color image processing model.
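As a rough illustration of steps 101 to 103, here is a minimal PyTorch-style sketch, not the patented implementation: the simple convolutional autoencoders, the equal weighting of the two reconstruction terms in the "first loss parameter", and all tensor shapes are assumptions made for the example; the excerpt does not spell out how the depth branch influences the adjustment of the color image processing model, so the two branches are kept independent here.

```python
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    """Toy encoder-decoder standing in for the color / depth image processing models."""
    def __init__(self, in_channels):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, in_channels, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

color_model = ConvAutoencoder(in_channels=3)   # color image processing model (RGB)
depth_model = ConvAutoencoder(in_channels=1)   # depth image processing model
optimizer = torch.optim.Adam(
    list(color_model.parameters()) + list(depth_model.parameters()), lr=1e-3)

# Step 101: placeholder "training sample set" of paired color / depth images.
color_sample = torch.rand(8, 3, 64, 64)
depth_sample = torch.rand(8, 1, 64, 64)

# Step 102: process each modality to obtain its reconstructed image.
color_recon = color_model(color_sample)
depth_recon = depth_model(depth_sample)

# Step 103: the "first loss parameter" is assumed here to be the sum of the two
# reconstruction errors; the models are then adjusted based on it.
first_loss = (nn.functional.mse_loss(color_recon, color_sample)
              + nn.functional.mse_loss(depth_recon, depth_sample))
optimizer.zero_grad()
first_loss.backward()
optimizer.step()
```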

Embodiment 2

[0105] To better illustrate the purpose of this application, further description is provided on the basis of Embodiment 1. As shown in Figure 4, after the trained color image processing model is obtained, the image processing method further includes:

[0106] Step 401: Input the color sample image into the first encoding module of the trained color image processing model, and output the encoded second color sample image;

[0107] Here, the trained color image processing model also includes: a first encoding module and a first decoding module.

[0108] Step 402: Determine a fifth loss parameter of the control model based on the encoded second color sample image and the state label of the robot;

[0109] In practical applications, the training sample set further includes: at least one state label of the robot corresponding to the color sample image.

[0110] Specifically, the image acquisition device collects the color sample image and the depth sample image...
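A minimal sketch of steps 401 and 402, under the assumption that the control model is a small fully connected head on top of the features produced by the first encoding module and that the robot state label is a low-dimensional vector; `state_dim`, the MSE loss, and freezing the trained encoder are illustrative choices, not taken from the patent text.

```python
import torch
import torch.nn as nn

state_dim = 7  # assumed dimensionality of the robot state label (e.g. joint angles)

# Stand-in for the first encoding module of the trained color image processing model.
first_encoder = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
)
for p in first_encoder.parameters():
    p.requires_grad_(False)  # the trained color model is kept fixed in this sketch

# Toy "control model": maps encoded color features to a predicted robot state.
control_model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 128), nn.ReLU(),
    nn.Linear(128, state_dim),
)
optimizer = torch.optim.Adam(control_model.parameters(), lr=1e-3)

color_sample = torch.rand(8, 3, 64, 64)   # placeholder color sample images
state_label = torch.rand(8, state_dim)    # placeholder robot state labels

# Step 401: encode the color sample image with the first encoding module.
encoded_color = first_encoder(color_sample)

# Step 402: the "fifth loss parameter" is assumed here to compare the control
# model's prediction against the robot state label for the same sample.
fifth_loss = nn.functional.mse_loss(control_model(encoded_color), state_label)
optimizer.zero_grad()
fifth_loss.backward()
optimizer.step()
```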

Embodiment 3

[0122] To better illustrate the purpose of this application, further description is provided on the basis of Embodiment 1. As shown in Figure 5, after the trained color image processing model and the trained control model are obtained, the image processing method further includes:

[0123] Step 501: Input the color sample image into the first encoding module of the trained color image processing model, and output the encoded second color sample image;

[0124] Here, the trained color image processing model also includes: a first encoding module and a first decoding module.

[0125] Step 502: Input the depth sample image into the second encoding module of the trained depth image processing model, and output the encoded second depth sample image;

[0126] Here, the trained depth image processing model also includes: a second encoding module and a second decoding module.

[0127] In practical applications, the first encoding module and the second encoding module...
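For steps 501 and 502, the sketch below simply runs the two trained encoding modules on a paired color and depth sample. The excerpt is cut off before describing how the two encodings are used, so the feature-alignment loss at the end is a placeholder assumption, not the patent's loss.

```python
import torch
import torch.nn as nn

def make_encoder(in_channels):
    """Shared toy architecture standing in for the first / second encoding modules."""
    return nn.Sequential(
        nn.Conv2d(in_channels, 16, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    )

first_encoder = make_encoder(3)    # from the trained color image processing model
second_encoder = make_encoder(1)   # from the trained depth image processing model

color_sample = torch.rand(8, 3, 64, 64)  # placeholder paired samples
depth_sample = torch.rand(8, 1, 64, 64)

# Step 501: obtain the encoded second color sample image.
encoded_color = first_encoder(color_sample)
# Step 502: obtain the encoded second depth sample image.
encoded_depth = second_encoder(depth_sample)

# Placeholder for the subsequent (truncated) steps: e.g. encouraging the two
# encodings of the same scene to agree in feature space.
alignment_loss = nn.functional.mse_loss(encoded_color, encoded_depth)
print(alignment_loss.item())
```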


PUM

No PUM

Abstract

The embodiment of the invention discloses an image processing method and device, and the method comprises the steps: obtaining a training sample set which comprises at least one color sample image and at least one depth sample image corresponding to the color sample image; inputting the training sample set into an image processing model for processing, wherein the image processing model comprises a color image processing model and a depth image processing model, the color image processing model is used for processing the color sample image to obtain a color reconstructed image, and the depth image processing model is used for processing the depth image to obtain a depth reconstructed image; and determining a first loss parameter of the image processing model based on the color sample image, the color reconstructed image, the depth sample image and the depth reconstructed image, and adjusting the color image processing model based on the first loss parameter to obtain a trained color image processing model.
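To make the structure of the training sample set concrete, here is a minimal sketch of a paired color and depth sample set in PyTorch; the in-memory layout, the optional robot state labels (mentioned in Embodiment 2), and all tensor shapes are assumptions for illustration only.

```python
import torch
from torch.utils.data import Dataset, DataLoader

class ColorDepthSampleSet(Dataset):
    """Toy training sample set: each item pairs a color sample image with its
    corresponding depth sample image (and, optionally, a robot state label)."""
    def __init__(self, color_images, depth_images, state_labels=None):
        assert len(color_images) == len(depth_images)
        self.color_images = color_images
        self.depth_images = depth_images
        self.state_labels = state_labels

    def __len__(self):
        return len(self.color_images)

    def __getitem__(self, idx):
        item = {
            "color": self.color_images[idx],   # e.g. 3 x H x W RGB tensor
            "depth": self.depth_images[idx],   # e.g. 1 x H x W depth tensor
        }
        if self.state_labels is not None:
            item["state"] = self.state_labels[idx]
        return item

# Placeholder data: 100 paired 64x64 samples with 7-dimensional state labels.
dataset = ColorDepthSampleSet(
    color_images=torch.rand(100, 3, 64, 64),
    depth_images=torch.rand(100, 1, 64, 64),
    state_labels=torch.rand(100, 7),
)
loader = DataLoader(dataset, batch_size=8, shuffle=True)
batch = next(iter(loader))
print(batch["color"].shape, batch["depth"].shape, batch["state"].shape)
```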

Description

Technical Field

[0001] The present application relates to robot technology, and in particular to an image processing method and device.

Background Technique

[0002] Research on robot learning is mainly divided into two directions: supervised learning and reinforcement learning, where supervised learning is further subdivided into directions such as imitation learning and self-supervised learning. Imitation learning trains a network model by collecting expert demonstration information; its disadvantage is that a large amount of expert demonstration information is required, the environment is usually assumed to be known and limited, and the results are poor for the open, complex scenes found in reality. Self-supervised learning collects labeled data through robot trial-and-error experiments; its disadvantage is that the success rate of trial-and-error experiments is low, data collection is very inefficient, and trial and error in a real environment w...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T5/50
CPC: G06T5/50; G06T2207/10024
Inventor: 吴华栋, 张展鹏, 成慧, 杨凯
Owner: SHENZHEN SENSETIME TECH CO LTD