Multi-scene visual robot simulation platform and method

A simulation platform and method, applicable to instruments, simulators, measuring devices, and the like, that addresses problems such as the deterioration of landmark points in the image sensor's field of view, image sensor noise, and poor positioning performance, thereby reducing cost, handling scene complexity, and remaining simple to implement.

Active Publication Date: 2019-09-20
远形时空科技(北京)有限公司
Cites: 13 · Cited by: 2

AI-Extracted Technical Summary

Problems solved by technology

In such cases, SLAM cannot obtain enough sufficiently good landmark points to complete autonomous positioning.
For another example, the robot's image sensor captures a good landmark point while the robot is in a first pose, but operational needs cause the robot to change to a second pose within a short time (for example, its direction of travel changes), at which point the landmark point ...

Abstract

The invention discloses a multi-scene visual robot simulation platform and method, belonging to the technical field of robots and computers. A data acquisition device collects the actual robot positioning information and sensor information required for robot simulation; a positioning and tracking device and a sensor acquisition device are mounted on a real robot; the real trajectory of a sweeping robot and other sensor information are acquired; and the information collected by the sensor acquisition device is fed into an AR image simulation device, which consists of a microcomputer equipped with an image processor and real-time simulation software. With this platform and method, the technical challenges that the complexity of indoor and outdoor scenes poses to the upgrading and iteration of SLAM algorithms can be overcome. The platform and method are simple to implement and highly repeatable.

Application Domain

Measurement devices · Simulator control

Technology Topic

Real-time simulation · Software (+7)


Examples

  • Experimental program(4)

Example Embodiment

[0041] Example 1: As shown in Figures 2, 3, 4, and 5, a multi-scene visual robot simulation platform includes:
[0042] Data acquisition device: collects the robot's actual information and transfers it to the AR image simulation device.
[0043] SLAM calculation module: contains a calculation processing unit, an image processing unit, and a communication unit.
[0044] AR image simulation device: generates different scene information on demand.
[0045] After acquiring the data, the AR image simulation device constructs the image scene. The design and construction of the image scene uses the three-dimensional software 3ds Max for modeling and texture mapping. Once the basic assets are built, the image simulation device can freely drag and assemble them: doors can actually be opened and closed, wall lights can be switched on and off, room furniture can be moved, and so on.
[0046] The construction scheme uses Unity3D. Unity3D is a comprehensive multi-platform game development tool created by Unity Technologies that lets users easily create interactive content such as 3D video games, architectural visualizations, and real-time 3D animations; it is a fully integrated, professional game engine.
[0047] Unity is similar to software such as Director, the Blender game engine, Virtools, and Torque Game Builder, all of which use an interactive graphical development environment as their primary authoring method.
[0048] System objects fall into three categories: space objects such as lights, walls, floors, and pillars; decoration objects such as doors, windows, and ornaments; and scene objects such as two-dimensional pictures, video, or audio information.
[0049] Dividing objects in this way facilitates the management and rendering of different scenes. All system objects are displayed across the three scene categories by the object management module.
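The three-category object taxonomy and the interactive behaviors described above (opening doors, toggling lights) might be sketched as a minimal object-management module. This is purely illustrative: all class, method, and object names below are assumptions, not the patent's actual implementation.

```python
from dataclasses import dataclass, field
from enum import Enum

class Category(Enum):
    SPACE = "space"            # lights, walls, floors, pillars
    DECORATION = "decoration"  # doors, windows, ornaments
    SCENE = "scene"            # 2D pictures, video, audio

@dataclass
class SceneObject:
    name: str
    category: Category
    state: dict = field(default_factory=dict)  # e.g. {"open": False}

class ObjectManager:
    """Groups objects by category so each scene can be managed and drawn separately."""

    def __init__(self):
        self.objects: list[SceneObject] = []

    def add(self, obj: SceneObject) -> None:
        self.objects.append(obj)

    def by_category(self, category: Category) -> list[SceneObject]:
        return [o for o in self.objects if o.category == category]

    def toggle(self, name: str, key: str) -> None:
        # Flip a boolean state, e.g. open/close a door or switch a light.
        for o in self.objects:
            if o.name == name:
                o.state[key] = not o.state.get(key, False)

mgr = ObjectManager()
mgr.add(SceneObject("wall_light", Category.SPACE, {"on": False}))
mgr.add(SceneObject("front_door", Category.DECORATION, {"open": False}))
mgr.toggle("front_door", "open")
print(mgr.objects[1].state["open"])  # True
```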
[0050] As shown in Figure 1, the AR image simulation process of the visual robot simulation platform is divided into three main stages: data collection, capture and collection of the system environment data, and finally Unity3D rendering.
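The three-stage flow described above (data collection, environment capture, Unity3D rendering) could be outlined as a simple pipeline. The stage functions and the toy data below are placeholders assumed for illustration; in the real system the final stage would be Unity3D rendering rather than a Python function.

```python
def collect_data():
    # Stage 1: gather recorded robot poses and sensor information (placeholder data).
    return {"poses": [(0.0, 0.0, 0.0), (0.5, 0.0, 0.1)], "sensors": {"odom": [0.5]}}

def capture_environment(data):
    # Stage 2: combine the collected data with the modeled scene assets.
    return {"scene": "living_room", "trajectory": data["poses"]}

def render(scene):
    # Stage 3: the real system hands off to Unity3D here; we just report a frame count.
    return len(scene["trajectory"])

frames = render(capture_environment(collect_data()))
print(frames)  # 2
```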
[0051] Once the environment is built, the scene database can be used within the basic environment to build scenes at different times of day.
[0052] The core principles of the AR image simulation device are illustrated in Figure 2, and the simulation environment constructed in this way is shown in Figure 1.

Example Embodiment

[0053] Example 2: As shown in Figure 2, the software architecture of the multi-scene visual robot simulation platform is as follows:
[0054] Data collection comprises two parts: a sensor collection device and a positioning and tracking device.
[0055] The SLAM calculation module receives the simulation information generated by the AR image simulation device.
[0056] The overall test result is compared against, and optimized using, the actually acquired sweeper motion information.
[0057] In particular, the scene image information, together with the pose-change parameters of the autonomously walking robot frame, is transferred from the AR image simulation device to the SLAM calculation module.
[0058] Figure 2 gives a schematic diagram of the process flow of the method of the present invention, in which the sensor acquisition device within the data acquisition device supplies the robot sensor information used by the AR image simulation device.
[0059] The data acquisition device obtains the actual information; it must be fixed to the robot so that it can collect and save information in real time while the robot moves.
[0060] The sensor collection device can incorporate multiple modules depending on the information to be collected. The positioning and tracking device provides real-time robot pose information, which can be compared against the output of the SLAM calculation module for result analysis once the SLAM computation is complete.
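Comparing the SLAM estimate against the ground-truth poses from the positioning and tracking device can be done with a simple trajectory-error metric. The absolute trajectory error (ATE) computation below is one common approach, sketched here as an assumption; the patent does not specify which metric its result analysis uses.

```python
import math

def absolute_trajectory_error(ground_truth, estimate):
    """RMS Euclidean distance between time-aligned 2D ground-truth and estimated positions."""
    assert len(ground_truth) == len(estimate)
    sq = [(gx - ex) ** 2 + (gy - ey) ** 2
          for (gx, gy), (ex, ey) in zip(ground_truth, estimate)]
    return math.sqrt(sum(sq) / len(sq))

# Poses from the positioning tracking device vs. the SLAM module (illustrative values).
truth = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
slam = [(0.0, 0.1), (1.0, -0.1), (2.1, 0.0)]
print(round(absolute_trajectory_error(truth, slam), 3))  # 0.1
```

A lower ATE in a given simulated scene indicates that the SLAM output stays close to the tracked trajectory, matching the qualitative comparisons made in the embodiments.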
[0061] The AR image simulation device is mainly used to generate image scenes, so that the visual computing module can be tested across a variety of scenes.
[0062] The SLAM calculation module completes simultaneous localization and mapping based on the image data.
[0063] The above modules are divided only by function. In a specific implementation, the data acquisition device need not be mounted on the robot; its data can instead be transferred directly to the AR image simulation device over a network and combined with the AR image simulation device module there.

Example Embodiment

[0064] Example 3: As shown in Figures 2, 3, 4, and 5, a multi-scene visual robot simulation method includes the following steps:
[0065] During home testing, the sweeping robot is tested in the same household over different time periods and scenarios.
[0066] Figure 3 shows a test conducted in an environment with ample sunlight, where the various viewpoints and scenes are relatively clear. The sweeper can be positioned very well in such an environment. After comparing with the actual positioning, we found no major deviation; the overall positioning result is very close to the pose obtained by the positioning and tracking device. The results obtained by SLAM in this test scenario are ideal.
[0067] Figure 4 shows a test in a scene just before sunset. In this scene, corner details far from the window cannot be distinguished. In actual testing of such areas, the SLAM positioning shows a relatively large deviation, but the positioning near the window remains good; compared against the positioning and tracking device, it differs little from the test under the Figure 3 environment.
[0068] There is another situation, also shown in Figure 4: a test conducted in a dark environment after sunset, with essentially no light in the room. With the lights off, the test results are largely unsatisfactory; compared with the actual result, deviations appear in many places and the overall performance is poor.
[0069] The final simulated scene is the case of no sunlight at night, with the sweeper performing the cleaning simulation with the lights turned on, as shown in Figure 5. The overall result depends closely on the light source: some areas near the lights are overexposed, and the sweeper's positioning shows some relatively large deviations there.
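The four lighting scenarios above show qualitatively that landmark quality degrades with poor illumination. A crude quantitative proxy is to count strong intensity gradients in an image, since feature detectors rely on such edges. The sketch below is an illustrative assumption, not a metric taken from the patent.

```python
def strong_gradient_count(image, threshold=30):
    """Count horizontal-gradient pixels above a threshold -- a crude proxy
    for how many trackable landmarks a scene's lighting leaves visible."""
    count = 0
    for row in image:
        for a, b in zip(row, row[1:]):
            if abs(a - b) > threshold:
                count += 1
    return count

# Toy one-row "images": a well-lit scene with sharp edges vs. a dim, low-contrast scene.
daylight = [[10, 200, 15, 220, 20]]
dusk = [[10, 30, 15, 35, 20]]
print(strong_gradient_count(daylight), strong_gradient_count(dusk))  # 4 0
```

In a real pipeline one would instead run a feature detector (e.g. a corner detector) on the rendered frames, but even this toy count reflects why the dusk and night scenes yield worse SLAM positioning.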
[0070] In this example, by simulating the sweeper's scene at different times of day, problems in the SLAM algorithm can be found relatively quickly and then addressed in a targeted manner. By comparison, a manual scene test can easily consume an entire day per scene, which is unacceptable for the algorithm's iteration speed.
[0071] This multi-scene visual robot simulation platform removes the limitations of a physical platform: simulations can be run many times and the results viewed in real time, giving strong real-time performance.
[0072] Besides testing the simulation of the same room at different moments, indoor test scenarios in different rooms under the same lighting conditions can also be tested.
