
Inspection robot target positioning method based on deep learning framework

An inspection robot and deep learning technology, applied to neural learning methods, instruments, manipulators, etc., which can solve problems such as the inability to judge whether an inspection image contains equipment targets, reduced accuracy of feature-point coordinate mapping between images, and the robot being unable to recognize the equipment area.

Active Publication Date: 2019-07-05
山东沐点智能科技有限公司
Cites: 9 · Cited by: 25

AI Technical Summary

Problems solved by technology

This process also requires manual calibration of the equipment area to ensure accuracy, and feature matching between the two images is strongly affected by their differences: the robot's position, its posture, the ambient light intensity and other conditions during inspection image acquisition differ from those of the template image, which reduces the accuracy of feature-point coordinate mapping between the images.
In addition, image feature registration algorithms, such as the SIFT feature algorithm, the HOG gradient-histogram algorithm and the Haar corner algorithm, are generally sensitive to changes in light intensity. When the brightness of the two images differs greatly, and especially when the inspection robot encounters strong sunlight, equipment reflections, a bright sky background or similar conditions, the registration accuracy drops sharply, causing target positioning deviation and leaving the robot unable to recognize the equipment area.
[0011] When the inspection robot collects the equipment image at a preset position, factors such as heading-angle error and gimbal-angle deviation may prevent a complete equipment image from being collected, or the equipment target may lie entirely outside the image. The existing template-image feature matching method cannot judge whether the inspection image contains an equipment target, so it cannot recognize the equipment status, nor can it tell the inspection robot what caused the erroneous result. Therefore, an inspection robot target positioning method based on a deep learning framework is proposed to solve the above problems.
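The patent contains no code; the following is a minimal sketch of the conventional template-image registration pipeline criticized above (SIFT features, ratio-test matching, homography mapping of the device region), written with OpenCV. The function name, image paths and the ratio threshold are illustrative assumptions, not values from the patent; the point is that when lighting differs sharply between the template and the inspection image, few good matches survive and the mapped device region becomes unreliable.

```python
import cv2
import numpy as np

def match_template_to_inspection(template_path, inspection_path, ratio=0.75):
    # Load the pre-configured template image and the live inspection image.
    template = cv2.imread(template_path, cv2.IMREAD_GRAYSCALE)
    inspection = cv2.imread(inspection_path, cv2.IMREAD_GRAYSCALE)

    # SIFT keypoints/descriptors (one of the registration algorithms named above).
    sift = cv2.SIFT_create()
    kp_t, des_t = sift.detectAndCompute(template, None)
    kp_i, des_i = sift.detectAndCompute(inspection, None)

    # Brute-force matching with Lowe's ratio test. Strong sunlight, reflections
    # or a bright sky background shift the descriptor distances, so few matches
    # survive and the estimated mapping degrades.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pairs = matcher.knnMatch(des_t, des_i, k=2)
    good = [p[0] for p in pairs if len(p) == 2 and p[0].distance < ratio * p[1].distance]

    if len(good) < 4:
        return None  # not enough matches to estimate a homography: positioning fails

    src = np.float32([kp_t[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_i[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H  # maps template device-region coordinates into the inspection image
```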


Image

Drawings (Figures 1-3): Inspection robot target positioning method based on deep learning framework


Embodiment Construction

[0037] The following clearly and completely describes the technical solutions in the embodiments of the present invention with reference to the accompanying drawings of those embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention.

[0038] Referring to Figures 1-3, a method for target positioning of an inspection robot based on a deep learning framework comprises the following steps:

[0039] S1. Configure the device tree for the robot inspection scene, divide the device types, and perform tree classification for each device. Under the instrument category at the root node, branch nodes such as pointer instruments and digital display instruments can be divided; pointer instruments can be further divided into leaf nodes such as rectangular floating-pointer and arc-axis-pointer types. Each device to be inspected is assigned a category.
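The patent does not prescribe a data structure for this device tree; the following is a minimal sketch assuming a nested Python dictionary, using the example categories named in S1. The digital-display leaf name is a placeholder, since the text only names leaf types for pointer instruments.

```python
# Root node: instrument category; branch nodes: pointer / digital-display
# instruments; leaf nodes: concrete meter types (one per inspected device).
DEVICE_TREE = {
    "instrument": {
        "pointer_instrument": [
            "rectangular_floating_pointer",
            "arc_axis_pointer",
        ],
        "digital_display_instrument": [
            "digital_display",  # placeholder leaf name, not given in the patent
        ],
    },
}

def leaf_classes(tree: dict) -> list:
    """Flatten the tree into the list of leaf-node classes (one label per device type)."""
    leaves = []
    for branches in tree.values():
        for leaf_list in branches.values():
            leaves.extend(leaf_list)
    return leaves
```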

[0040] S2. Collect and make a sample image of each leaf-node-type device, and perform sample reproduction on the device image.
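The text of S2 is truncated at this point. Interpreting "sample reproduction" as basic image augmentation (brightness/contrast changes and small rotations, the kinds of variation discussed in the background), a hedged sketch might look as follows; the operations and parameter ranges are assumptions for illustration only, not values taken from the patent.

```python
import random
import cv2
import numpy as np

def reproduce_samples(image: np.ndarray, n_copies: int = 8) -> list:
    """Generate augmented copies of one device sample image."""
    h, w = image.shape[:2]
    copies = []
    for _ in range(n_copies):
        out = image.copy()
        # Random brightness/contrast change (simulates sunlight and reflections).
        alpha = random.uniform(0.6, 1.4)   # contrast factor
        beta = random.uniform(-40, 40)     # brightness offset
        out = cv2.convertScaleAbs(out, alpha=alpha, beta=beta)
        # Small random rotation around the image centre (simulates pose deviation).
        angle = random.uniform(-10, 10)
        M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
        out = cv2.warpAffine(out, M, (w, h))
        copies.append(out)
    return copies
```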


Abstract

The invention discloses an inspection robot target positioning method based on a deep learning framework, the method comprising the following steps: S1, configuring an equipment tree in the robot inspection scene, dividing equipment types, and carrying out tree classification for each piece of equipment; and S2, collecting and producing a sample image of each leaf-node-type device, and performing sample reproduction on the device image. The invention deploys the deep learning algorithm in the inspection robot's vision module and achieves accurate positioning of target equipment in acquired real-time images. It can be adapted to various devices in various scenes. It effectively solves the problem of robot inspection task errors caused by equipment positioning errors in previous methods, frees up a large amount of manual configuration work, improves the efficiency and quality of robot inspection, effectively reduces the working intensity of on-site workers, and greatly reduces the configuration work required for existing template images.

Description

Technical Field

[0001] The invention relates to the technical field of inspection robot image processing, and in particular to a method for target positioning of an inspection robot based on a deep learning framework.

Background Technique

[0002] In recent years, deep learning technology has been widely used in the field of computer vision and has achieved rich results in tasks such as face recognition, intelligent driving, and scene classification. Deep learning is the main direction of current artificial intelligence research. The concept of deep learning originated from research on artificial neural networks; a multi-layer perceptron with multiple hidden layers is a deep learning structure. Deep learning combines low-level features to form more abstract high-level representations of attribute categories or features, in order to discover distributed feature representations of data. Deep learning is defined in contrast to shallow learning. At present, most learning algorithms such as classifica...
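For illustration only, a minimal NumPy sketch of the structure referred to above, a multi-layer perceptron with several hidden layers; the layer sizes are arbitrary and this is not part of the patented method.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def mlp_forward(x, weights, biases):
    """Forward pass: ReLU hidden layers followed by a linear output layer."""
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = relu(h @ W + b)  # hidden layers build increasingly abstract features
    return h @ weights[-1] + biases[-1]

# Example: two hidden layers mapping a 16-d input to 3 output classes.
rng = np.random.default_rng(0)
sizes = [16, 32, 32, 3]
weights = [rng.standard_normal((a, b)) * 0.1 for a, b in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(b) for b in sizes[1:]]
logits = mlp_forward(rng.standard_normal(16), weights, biases)
```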

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/00, G06K9/62, G06N3/04, G06N3/08, B25J9/16
CPC: G06N3/084, B25J9/1605, B25J9/163, G06V20/10, G06N3/044, G06N3/045, G06F18/24
Inventor: 房桦, 马青岷, 张世伟, 朱孟鹏, 孙自虎, 李现奇
Owner: 山东沐点智能科技有限公司