
Data acquisition method based on deep learning and multi-eye vision in digital twin environment

A deep learning and data acquisition technology, applied to neural learning methods, character and pattern recognition, biological neural network models, etc. It addresses problems such as the complicated installation of sensor acquisition equipment, its lack of universality, and unstable recognition.

Active Publication Date: 2020-07-31
ZHENGZHOU UNIVERSITY OF LIGHT INDUSTRY
Cites: 9 · Cited by: 0

AI Technical Summary

Problems solved by technology

[0010] In view of the above prior art, the technical problems to be solved by the present invention are: the installation of sensor-based data acquisition equipment is complicated and lacks universality; algorithms that directly identify and locate target objects with machine vision are complex; indirect identification and positioning by means of marker points is unstable under deep learning when the background changes; and positioning targets through binocular or multi-eye vision suffers from positioning errors caused by calibration complexity and image distortion. The invention therefore provides a data acquisition method based on deep learning and multi-eye vision in a digital twin environment.




Embodiment Construction

[0086] The technical solutions in the embodiments of the present invention will be described clearly and completely below in conjunction with the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by a person of ordinary skill in the art based on these embodiments without creative effort fall within the protection scope of the present invention.

[0087] A data acquisition method based on deep learning and multi-eye vision in a digital twin environment, as shown in Figure 1, comprises the following steps:

[0088] S1. Set a spherical marker point that is highly distinguishable from the environmental background, as shown in Figure 3.

[0089] The spherical marker point has a specific color that is highly distinguishable from the environmental background.

[0090] S2. Obtain ...
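Step S2 (per the abstract, obtaining the coordinates and radius of the marker's center in the video image) can be sketched in pure Python. This is a hypothetical, minimal illustration, not the patent's implementation: it assumes color thresholding has already produced a binary mask of marker pixels, takes the centroid as the center, and derives the radius from the pixel area via area = πr².

```python
import math

def marker_center_radius(mask):
    """Return (cx, cy, r) for a binary mask (list of rows of 0/1).

    Center is the centroid of the marker pixels; radius is inferred
    from the pixel count n via n = pi * r^2.
    """
    pts = [(x, y) for y, row in enumerate(mask)
                  for x, v in enumerate(row) if v]
    if not pts:
        raise ValueError("no marker pixels found")
    n = len(pts)
    cx = sum(x for x, _ in pts) / n
    cy = sum(y for _, y in pts) / n
    r = math.sqrt(n / math.pi)
    return cx, cy, r

# Synthetic 41x41 mask: a filled circle of radius 10 centered at (20, 20),
# standing in for the thresholded video frame.
mask = [[1 if (x - 20) ** 2 + (y - 20) ** 2 <= 100 else 0
         for x in range(41)] for y in range(41)]
cx, cy, r = marker_center_radius(mask)
print(cx, cy, r)  # center (20, 20), radius close to 10
```

A real pipeline would run this per frame and per camera; the centroid-plus-area estimate is robust to small mask noise because both statistics average over all marker pixels.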



Abstract

The invention discloses a data acquisition method based on deep learning and multi-eye vision in a digital twin environment. The steps are as follows: S1, set a spherical marker point that is distinguishable from the environmental background; S2, obtain the coordinates and radius of the marker's center position in the video image; S3, build and train a deep learning model; S4, attach marker points to the target object to be positioned and use the model from step S3 to locate the markers in space, thereby positioning the target object. The invention can be used to acquire position and attitude data of various target objects in a digital twin environment and has strong universal applicability. With the aid of the markers, the complexity of visual image analysis and processing is reduced, making identification and positioning simpler, more efficient, and more reliable. Using deep learning to locate the marker points minimizes camera positioning error due to image distortion and adapts to various camera counts and layouts.
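The abstract's core idea in steps S3–S4 is to learn a direct mapping from the marker's per-camera pixel coordinates to its 3D position, so that no explicit calibration or distortion model is needed and any number of cameras can be used. The toy sketch below illustrates that idea only: the pinhole `project` function, the camera offsets, and the 1-nearest-neighbour lookup (standing in for the trained deep network) are all my assumptions, with fully synthetic data.

```python
import math

def project(p, cam_x):
    # Toy pinhole camera on the x axis looking along +z:
    # focal length 500 px, principal point (320, 240).
    x, y, z = p
    return (320 + 500 * (x - cam_x) / z, 240 + 500 * y / z)

def features(p, cams=(-0.2, 0.2)):
    # Concatenate the marker's pixel coordinates across all cameras;
    # adding a camera just adds two more feature dimensions.
    out = []
    for c in cams:
        out.extend(project(p, c))
    return out

# "Training": place the marker at known grid positions and record the
# pixel coordinates each camera observes.
train = []
for xi in range(-5, 6):
    for yi in range(-5, 6):
        for zi in range(8, 13):
            p = (xi / 10, yi / 10, zi / 10)
            train.append((features(p), p))

def predict(feat):
    # 1-NN in pixel-coordinate space stands in for the deep model.
    return min(train,
               key=lambda t: sum((a - b) ** 2
                                 for a, b in zip(t[0], feat)))[1]

query = (0.13, -0.22, 1.04)        # true 3D position, off the grid
est = predict(features(query))     # nearest grid point in pixel space
err = math.dist(est, query)
print(est, round(err, 3))
```

Replacing the lookup with a regression network trained on the same (pixel features, 3D position) pairs is what lets the learned mapping absorb lens distortion instead of modeling it explicitly.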

Description

Technical field

[0001] The invention belongs to the technical field of digital acquisition, and in particular relates to a data acquisition method in a digital twin environment, especially a data acquisition method based on deep learning and multi-eye vision.

Background technique

[0002] Digital twin technology requires a high-fidelity simulation of physical equipment and must grasp its various state data in real time, so that the simulation model stays consistent with the equipment's real-time state. Digital twin technology relies on perception and control technology and their integration. The mechanical, electrical, thermodynamic, and action state information of the physical equipment must be obtained by means of sensing technology.

[0003] In the construction of a digital twin system, it is first necessary to construct the 3D model of the physical e...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC (8): G06K9/20, G06K9/34, G06K9/46, G06N3/08
CPC: G06N3/08, G06V10/22, G06V10/267, G06V10/44
Inventors: 李浩, 刘根, 王昊琪, 文笑雨, 乔东平, 罗国富
Owner: ZHENGZHOU UNIVERSITY OF LIGHT INDUSTRY