
Monocular multi-modal depth map generation method, system, device and storage medium

A depth-map, multi-modal technology applied in the field of image processing; it addresses the problem that existing depth map acquisition methods cannot meet all-weather, multi-scene requirements, and achieves the effect of rich feature expression.

Active Publication Date: 2022-02-11
BEIJING SHENRUI BOLIAN TECH CO LTD +1

AI Technical Summary

Problems solved by technology

[0003] In view of the above problems, embodiments of the present invention provide a monocular multi-modal depth map generation method, system, device, and storage medium to solve the technical problem that existing depth map acquisition methods cannot meet all-weather, multi-scenario requirements.

Method used




Embodiment Construction

[0070] In order to make the objectives, technical solutions and advantages of the present invention clearer and more comprehensible, the present invention will be further described below with reference to the accompanying drawings and specific embodiments. Obviously, the described embodiments are only some, but not all, embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative efforts shall fall within the protection scope of the present invention.

[0071] Based on the shortcomings of the prior art, the embodiments of the present invention provide a specific implementation of a method for generating a monocular multi-modal depth map, as shown in figure 1 and figure 2. The method specifically includes:

[0072] S110: Create a dual-branch perceptual neural network, input the infrared image and the visible light image into the dual-branch perceptual neural ...
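The dual-branch idea in S110 can be sketched in a few lines: two separate encoder branches map the infrared image and the visible-light image to feature maps of matching size. This is a minimal NumPy sketch, not the patent's actual network; the 2x2 average pooling stands in for a real convolutional encoder stage, and all shapes are illustrative assumptions.

```python
import numpy as np

def avg_pool2(x):
    """2x2 average pooling: a toy stand-in for one encoder stage."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def branch(image, stages=2):
    """Toy encoder branch: repeated pooling yields a coarse feature map."""
    feat = image.astype(np.float64)
    for _ in range(stages):
        feat = avg_pool2(feat)
    return feat

ir_image = np.random.rand(8, 8)       # infrared input (single channel, assumed size)
vis_image = np.random.rand(8, 8)      # visible-light input (single channel)

ir_feat = branch(ir_image)            # infrared feature map
vis_feat = branch(vis_image)          # visible-light feature map
print(ir_feat.shape, vis_feat.shape)  # (2, 2) (2, 2)
```

The two branches share an architecture here only for brevity; in practice each modality could have its own learned weights.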



Abstract

The invention provides a method, system, device and storage medium for generating a depth map based on monocular multi-modality, belonging to the technical field of image processing, and solves the technical problem that existing depth map acquisition methods cannot meet all-weather, multi-scene requirements. The method includes: creating a dual-branch perceptual neural network and inputting infrared images and visible light images into it to generate infrared feature maps and visible light feature maps; fusing the infrared feature map and the visible light feature map across modalities through mutual perception to obtain a feature fusion map; and up-sampling the feature fusion map through the dual-branch perceptual neural network to generate a new depth map. Based on image data from the two modalities of infrared and visible light, cross-modal fusion is performed at the feature level, and a new depth map that combines the advantages of visible light images and infrared images is finally generated, enabling depth acquisition under all-weather, multi-scene conditions.
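The fusion-then-upsample pipeline described in the abstract can be illustrated with a toy NumPy sketch. The gating scheme below (each modality's features modulating the other through a sigmoid) is an assumption chosen to illustrate "mutual perceptual" cross-modal fusion, and nearest-neighbour upsampling stands in for the network's decoder; neither is taken from the patent itself.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cross_modal_fuse(ir_feat, vis_feat):
    """Illustrative mutual fusion: each modality gates the other,
    and the gated maps are summed into one fused feature map."""
    gate_ir = sigmoid(vis_feat)   # visible features modulate infrared
    gate_vis = sigmoid(ir_feat)   # infrared features modulate visible
    return gate_ir * ir_feat + gate_vis * vis_feat

def upsample2(feat):
    """Nearest-neighbour 2x upsampling, standing in for the decoder."""
    return feat.repeat(2, axis=0).repeat(2, axis=1)

ir_feat = np.random.rand(4, 4)            # assumed feature-map size
vis_feat = np.random.rand(4, 4)
fused = cross_modal_fuse(ir_feat, vis_feat)
depth = upsample2(upsample2(fused))       # up-sampled to a 16x16 "depth map"
print(depth.shape)  # (16, 16)
```

A real implementation would use learned attention weights and bilinear or transposed-convolution upsampling rather than these fixed operations.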

Description

Technical field

[0001] The present invention relates to the technical field of image processing, and in particular to a method, system, device and storage medium for generating a monocular multi-modal depth map.

Background technique

[0002] The depth map, also known as the distance map, has pixel values that represent the distance from the image collector to each point in the scene. This depth information helps in understanding the geometric relationship between objects and the environment, and plays an important role in augmented reality, focusing, object detection, and assisting the blind in perceiving the environment. A depth map can be obtained with a depth camera. The imaging methods of existing depth cameras fall roughly into three types: structured light, ToF (Time of Flight), and pure binocular. However, none of these three imaging methods can support depth-map acquisition under all-weather, multi-scene conditions. In the prior art, single modalit...
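For the "pure binocular" case mentioned above, depth is recovered from stereo disparity via the standard relation depth = focal_length x baseline / disparity. The numbers below are illustrative assumptions, not calibration values from the patent:

```python
import numpy as np

focal_length = 700.0     # focal length in pixels (assumed)
baseline = 0.12          # distance between the two cameras in metres (assumed)

# Per-pixel disparity (in pixels) between the left and right views.
disparity = np.array([[35.0, 70.0],
                      [14.0, 28.0]])

# Larger disparity means the point is closer to the cameras.
depth = focal_length * baseline / disparity   # metres
print(depth)  # [[2.4 1.2]
              #  [6.  3. ]]
```

This dependence on finding correspondences between two views is precisely what fails in low-light or textureless scenes, motivating the infrared-plus-visible approach of the invention.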

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC (8): G06V20/00, G06V10/40, G06V10/80, G06V10/82, G06K9/62, G06N3/04, G06N3/08
CPC: G06N3/08, G06V20/00, G06V10/40, G06N3/045, G06F18/253
Inventor 廉洁张树俞益洲李一鸣乔昕
Owner BEIJING SHENRUI BOLIAN TECH CO LTD