Three-dimensional scene fusion method and device based on monocular estimation

A three-dimensional scene fusion technology, applied in the field of 3D scene fusion methods and devices based on monocular estimation, which addresses the problem that the fusion of monitoring objects with static 3D scene models is unsatisfactory because existing target depth estimation methods are unsatisfactory.

Pending Publication Date: 2020-06-26
ZHEJIANG DAHUA TECH CO LTD

Problems solved by technology

[0008] The embodiment of the present invention provides a 3D scene fusion method and device based on monocular estimation, to at least solve the problem in the related art that the fusion of a monitoring object with a static 3D scene model is not ideal because the target depth estimation method is not ideal in implementation.




Embodiment Construction

[0072] Hereinafter, the present invention will be described in detail with reference to the drawings and examples. It should be noted that, where no conflict arises, the embodiments of the present application and the features in those embodiments may be combined with each other.

[0073] It should be noted that the terms "first" and "second" in the description, claims, and drawings of the present invention are used to distinguish similar objects, and are not necessarily used to describe a specific order or sequence.

[0074] An embodiment of the present invention provides a 3D scene fusion method based on monocular estimation. Figure 1 is a schematic diagram of the hardware environment of an optional monocular-estimation-based 3D scene fusion method according to an embodiment of the present invention. As shown in figure 1, the hardware environment may include, but is not limited to, an image acquisition device 102, a server 104, and a display device 106. Opti...



Abstract

The embodiment of the invention provides a three-dimensional scene fusion method and device based on monocular estimation. The method comprises the steps of: inputting an obtained first image into a target monocular depth estimation network to obtain a target depth map, the target monocular depth estimation network being obtained by training an initial monocular depth estimation network; obtaining depth information of a target object according to the target depth map and a target semantic segmentation map; and, according to the depth information of the target object and the parameter information of the device that collected the first image, obtaining the position information of the target object in a preset static three-dimensional scene, where a mapping relation exists between the coordinate system of the static three-dimensional scene and the world coordinate system in which the target object is located. This solves the prior-art problem that the fusion of a monitoring object with a static three-dimensional scene model is not ideal because the target depth estimation method is not ideal in implementation.
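The pipeline the abstract describes — a depth map from the network, the target's pixels from the semantic segmentation map, and camera parameters to place the target in the static scene — can be sketched roughly as follows. This is an illustrative reconstruction, not the patented implementation: the function name, the median-depth heuristic, and the single 4x4 transform `T_cam_to_scene` standing in for the patent's coordinate-system mapping are all assumptions.

```python
import numpy as np

def target_position_in_scene(depth_map, seg_mask, K, T_cam_to_scene):
    """Estimate a target's 3D position in a static scene model.

    depth_map:      HxW array of per-pixel depths from a monocular network
    seg_mask:       HxW boolean mask of the target's pixels (from segmentation)
    K:              3x3 camera intrinsic matrix
    T_cam_to_scene: 4x4 transform from camera coordinates to the static
                    scene's coordinate system (assumed known from calibration)
    """
    ys, xs = np.nonzero(seg_mask)
    if len(xs) == 0:
        return None  # target not visible in this frame
    # Median depth over the mask is robust to noisy boundary pixels.
    z = np.median(depth_map[ys, xs])
    u, v = xs.mean(), ys.mean()  # pixel centroid of the target
    # Back-project the pixel into a camera-frame 3D point: X = z * K^-1 [u, v, 1]^T
    p_cam = z * np.linalg.inv(K) @ np.array([u, v, 1.0])
    # Map the point into the static scene's coordinate system.
    p_scene = T_cam_to_scene @ np.append(p_cam, 1.0)
    return p_scene[:3]

# Toy example: a 4x4 "image" whose lower-right quadrant is the target.
K = np.array([[2.0, 0.0, 2.0],
              [0.0, 2.0, 2.0],
              [0.0, 0.0, 1.0]])
depth = np.full((4, 4), 5.0)
mask = np.zeros((4, 4), dtype=bool)
mask[2:, 2:] = True
pos = target_position_in_scene(depth, mask, K, np.eye(4))  # camera frame == scene frame here
```

With the identity transform the result is simply the back-projected camera-frame point; in the patented setting the transform would carry it into the pre-built static 3D scene model.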

Description

Technical field

[0001] The present invention relates to the technical field of computer vision, and in particular to a method and device for 3D scene fusion based on monocular estimation.

Background technique

[0002] Assuming that a static 3D scene model constructed from a real scene is known, we can monitor the corresponding moving objects in the real scene, such as people and cars, through a camera in real time, and fuse these objects with the static 3D scene model to provide a more intuitive monitoring picture. Here, depth estimation of the target is the key issue: if the depth from the target to the camera can be determined effectively, the target's position in the 3D scene can be determined. Common depth estimation methods are:

[0003] 1. Binocular distance measurement. The binocular distance measurement method determines the target distance from the baseline of a binocular camera. Its range is limited b...
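For contrast with the monocular approach, the binocular ranging that the background dismisses rests on the standard stereo triangulation relation Z = f·B/d, where f is the focal length in pixels, B the baseline, and d the disparity in pixels. A minimal sketch (function and variable names are illustrative):

```python
def binocular_depth(focal_px, baseline_m, disparity_px):
    """Depth from stereo disparity: Z = f * B / d.

    Range is limited in practice because disparity shrinks roughly as 1/Z,
    so distant targets produce sub-pixel disparities that drown in noise.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# E.g. f = 700 px, baseline = 0.12 m, disparity = 8.4 px
z = binocular_depth(700.0, 0.12, 8.4)  # -> 10.0 m
```

This dependence of usable range on baseline is the limitation that motivates the monocular depth-estimation network of the invention.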

Claims


Application Information

IPC(8): G06T7/50; G06T7/80; G06T3/40; G06T5/00; G06T7/10; G06N3/04; G06N3/08
CPC: G06T7/50; G06T7/10; G06T7/80; G06T5/006; G06T3/4007; G06N3/08; G06T2207/20081; G06T2207/20084; G06T2207/20221; G06N3/045
Inventors: 刘逸颖, 王晓鲁, 李乾坤, 卢维
Owner ZHEJIANG DAHUA TECH CO LTD