
Non-contact automatic mapping method based on deep learning

A non-contact technology based on deep learning, applied in the field of deep learning, which addresses the problems of time-consuming, misaligned, and low-quality manual texture mapping, achieving the effect of reducing manual steps and speeding up texturing.

Publication Date: 2020-10-13 (Pending)
TIANJIN UNIV

AI Technical Summary

Problems solved by technology

The most time-consuming step in manual mapping is accurately aligning the required parts of the captured image with the 3D model. The difficulty is that the camera parameters used to capture the image differ from those assumed by the professional software, so the perspective distortion of the image does not match the viewing angle in the software, which results in misalignment.
The disadvantage of manual mapping is therefore that it takes a long time and produces low-quality results.
[0004] Automatic mapping can better solve these problems. A key step in automatic mapping is obtaining the intrinsic and extrinsic parameters of the camera, but existing automatic mapping methods mainly rely on manually calibrating feature points on the model against the corresponding screen coordinates in the texture image, and then solving for the camera extrinsics from these coordinate pairs. To make this calibration easier, many methods attach sharp objects to the model as feature points during scanning. However, this is not acceptable in industries such as cultural relics protection, which require non-contact operation during scanning, so adding sharp objects is not feasible.
It is also unsuitable for industries with high requirements on texture-map quality, because a model may have dozens of texture images, and manually selecting coordinate pairs for each image remains a very time-consuming task.
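For reference, the conventional coordinate-pair workflow criticized above can be sketched as follows. This is not the patent's method but the baseline it seeks to replace; the intrinsics, the point coordinates, and the use of OpenCV's PnP solver are illustrative assumptions (the 2D points are synthesized from a known pose here so the sketch is self-contained, whereas in the traditional pipeline they would be picked by hand for every image).

```python
# Minimal sketch of the traditional point-pair approach: camera extrinsics
# recovered from 3D model points and their pixel coordinates. All values are
# illustrative and not taken from the patent.
import numpy as np
import cv2

K = np.array([[1000.0, 0.0, 640.0],      # assumed intrinsics from a prior calibration
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)                        # assume negligible lens distortion

# Non-coplanar feature points on the model, in the object coordinate frame (metres).
model_points = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [0.0, 0.1, 0.0],
                         [0.1, 0.1, 0.02], [0.05, 0.05, 0.05], [0.0, 0.05, 0.1]])

# Ground-truth pose used only to synthesize the "manually selected" pixel coordinates.
rvec_gt = np.array([[0.1], [-0.2], [0.05]])
tvec_gt = np.array([[0.0], [0.0], [1.0]])
image_points, _ = cv2.projectPoints(model_points, rvec_gt, tvec_gt, K, dist)

# Solve for the extrinsics from the coordinate pairs; this is the manual-calibration
# step that the patent replaces with a learned estimator.
ok, rvec, tvec = cv2.solvePnP(model_points, image_points, K, dist)
R, _ = cv2.Rodrigues(rvec)
print("recovered extrinsics [R|t]:\n", np.hstack([R, tvec]))
```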



Examples


Embodiment Construction

[0040] The present invention will be further described below in conjunction with the accompanying drawings and specific embodiments, but the following embodiments in no way limit the present invention.

[0041] The present invention proposes a non-contact automatic mapping method based on deep learning. The design idea is to use a deep learning method to replace the traditional approach of obtaining camera parameters from calibration point pairs. The basic steps of the method are shown in Figure 1: first prepare the object to be captured, the camera used for capture, and the 3D model of the object; then calibrate the camera with Zhang Zhengyou's calibration method [1], using a color test card as the calibration board. After calibration, the intrinsic parameters of the camera are obtained. However, since the focal length needs to be changed while capturing each image, the focal length paramete...
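As a rough illustration of the calibration step, the following sketch uses OpenCV's implementation of Zhang's method with a standard chessboard pattern. The patent instead uses a color test card as the calibration board, and the visible text does not describe how its features are detected, so the board type, board dimensions, and image folder below are assumptions.

```python
# Sketch of Zhang-style camera calibration with OpenCV (chessboard assumed in
# place of the color test card used by the patent). Paths and sizes are illustrative.
import glob
import numpy as np
import cv2

BOARD = (9, 6)      # inner corners per row/column (assumption)
SQUARE = 0.025      # square size in metres (assumption)

# 3D board points in the board's own plane (z = 0).
objp = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2) * SQUARE

obj_points, img_points, image_size = [], [], None
for path in glob.glob("calib_images/*.jpg"):          # hypothetical image folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    image_size = gray.shape[::-1]                     # (width, height)
    found, corners = cv2.findChessboardCorners(gray, BOARD)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

# calibrateCamera returns the intrinsic matrix K and distortion coefficients.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, image_size, None, None)
print("RMS reprojection error:", rms)
print("intrinsics K:\n", K)
```

Note that, per the embodiment, the focal length changes between captures, so the focal-length entries of K would need to be updated per image; how that is done is not shown in the visible text.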



Abstract

The invention discloses a non-contact automatic mapping method based on deep learning. The method comprises the following steps: calibrating a camera with a color test card using Zhang Zhengyou's calibration method to obtain the camera intrinsic parameters; collecting a plurality of original images of the object and recording the focal length parameters; correcting the color of the original images using the color test card picture to obtain an image A; establishing and training a camera extrinsic parameter estimation network; inputting the image A, the camera intrinsic parameters, and the object's three-dimensional model into the trained camera extrinsic parameter estimation network to obtain the camera extrinsic parameters; inputting the camera extrinsic and intrinsic parameters, the three-dimensional model of the object, and the image A into a rendering pipeline to obtain a rendering result and perform rendering alignment; and having the rendering pipeline store, according to the UV coordinates, the color values of the regions of image A determined to be used into a texture mapping file. The invention realizes an automatic mapping process, reduces a large number of manual steps, can receive parameters passed in by the user for fine adjustment, and accelerates high-quality mapping.
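The abstract names a camera extrinsic parameter estimation network, but the visible text does not specify its architecture. The following PyTorch sketch is therefore only an assumed baseline, not the patent's network: a small convolutional backbone over the color-corrected image whose features are fused with the intrinsics and regressed to a 6-DoF pose (axis-angle rotation plus translation). All layer sizes and names are illustrative.

```python
# Assumed baseline for a camera extrinsic estimation network (architecture is a
# guess; the patent's actual network is not given in the visible text).
import torch
import torch.nn as nn

class ExtrinsicEstimator(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(            # small conv backbone (assumption)
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Sequential(                # fuse image features with intrinsics
            nn.Linear(128 + 4, 128), nn.ReLU(),
            nn.Linear(128, 6))                    # axis-angle rotation + translation

    def forward(self, image, intrinsics):
        # image: (B, 3, H, W) color-corrected photo; intrinsics: (B, 4) = fx, fy, cx, cy
        feat = self.backbone(image)
        pose = self.head(torch.cat([feat, intrinsics], dim=1))
        return pose[:, :3], pose[:, 3:]           # (rotation, translation)

# Example forward pass with dummy data.
net = ExtrinsicEstimator()
rot, trans = net(torch.randn(2, 3, 256, 256), torch.randn(2, 4))
print(rot.shape, trans.shape)                     # torch.Size([2, 3]) torch.Size([2, 3])
```

Such a network would typically be trained against renderings of the 3D model under known poses, but the patent's supervision scheme is not stated in the visible text.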

Description

Technical field
[0001] The present invention mainly relates to a mapping method and process for three-dimensional models in image processing and computer graphics, as well as a method for estimating the intrinsic and extrinsic parameters of a camera, and involves a deep learning method in the field of artificial intelligence.
Background technology
[0002] With the rapid development of computer technology, the digitization needs of all walks of life are increasing day by day. For some industries, digitization means using computer graphics technology to digitize real-world objects into a computer, where they are stored and displayed, and the mapping process is an important part of this digitization. Mapping means that, after certain processing of an existing original texture image, the colors of the texture image are mapped onto the coordinates of the 3D model, so that the rendering of the 3D model is closer to its real-world appearance.
[0003] The existing texture ...
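To make the mapping step concrete, the following minimal sketch shows the geometric core implied by the description: given intrinsics K and extrinsics [R | t], each 3D model point projects to a pixel whose color can then be written into the texture at that point's UV coordinate. The values and the helper function are illustrative, not taken from the patent.

```python
# Illustrative projection of 3D model points into an image using K, R, t.
import numpy as np

def project(points_3d, K, R, t):
    """Project Nx3 object-space points to Nx2 pixel coordinates."""
    cam = points_3d @ R.T + t          # object frame -> camera frame
    uv = cam @ K.T                     # apply intrinsics
    return uv[:, :2] / uv[:, 2:3]      # perspective divide

K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, 2.0])

vertices = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [0.0, 0.1, 0.0]])
pixels = project(vertices, K, R, t)
print(pixels)   # pixel positions whose colors would be copied into the texture map
```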


Application Information

IPC(8): G06T7/80, G06T15/04, G06N3/04
CPC: G06T7/85, G06T15/04, G06N3/045, Y02T10/40
Inventor: 张梁昊, 张加万
Owner: TIANJIN UNIV