
Construction method of multi-vision task collaborative depth estimation model

A depth-estimation model-construction technology, applied in computing, image data processing, instruments, etc. It addresses problems such as parameter counts that are difficult to deploy and the limited scene-generalization ability of learned models, and achieves the effect of improving depth-estimation accuracy.

Active Publication Date: 2021-04-09
HUBEI UNIV OF TECH
Cites 12 · Cited by 10
  • Summary
  • Abstract
  • Description
  • Claims
  • Application Information

AI Technical Summary

Problems solved by technology

Learning-based methods can combine local context and prior knowledge to improve depth-estimation accuracy in ill-posed areas, but they depend strongly on the training dataset, which limits the scene-generalization ability of the model, and their large parameter counts make them difficult to deploy on energy- or memory-constrained electronic travel aid (ETA) devices.



Examples


Embodiment

[0064] The method was verified on the KITTI dataset and compared with several classic depth-acquisition algorithms; the experimental results are shown in Table 1. On the depth-map metrics, the present invention achieves the lowest error rate in both the global and the occluded areas, and recovers the depth of scene details more faithfully, as shown in Figure 2. The invention was also validated under different road conditions: as shown in Figure 3, good depth-estimation results are obtained under four different road conditions.
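The patent reports the lowest error rate both globally and in occluded regions on KITTI, but the table body is not reproduced on this page. For reference, the conventional KITTI-style stereo error rate counts the fraction of valid pixels whose disparity error exceeds a threshold, optionally restricted to a region mask (e.g. occluded areas). The sketch below uses the customary 3-pixel threshold; the function name, threshold, and example values are illustrative, not taken from the patent:

```python
import numpy as np

def bad_pixel_rate(pred_disp, gt_disp, mask=None, tau=3.0):
    """KITTI-style error rate: fraction of valid pixels whose absolute
    disparity error exceeds `tau` pixels. `mask` restricts evaluation to a
    region (e.g. occluded areas); ground-truth zeros are treated as invalid.
    """
    pred_disp = np.asarray(pred_disp, dtype=np.float64)
    gt_disp = np.asarray(gt_disp, dtype=np.float64)
    valid = gt_disp > 0
    if mask is not None:
        valid &= np.asarray(mask, dtype=bool)
    err = np.abs(pred_disp[valid] - gt_disp[valid])
    return float(np.mean(err > tau)) if err.size else 0.0

# Toy example: three valid pixels (gt > 0), one off by more than 3 px.
gt = np.array([[10.0, 20.0], [30.0, 0.0]])   # 0 = no ground truth
pred = np.array([[10.5, 25.0], [30.1, 7.0]])
rate = bad_pixel_rate(pred, gt)              # errors: 0.5, 5.0, 0.1 → rate ≈ 0.333
```

Evaluating the same function with a region mask gives per-region rates, which is how "global" and "occluded-area" errors can be reported side by side.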

[0065] Table 1. Experimental comparison on the KITTI dataset



PUM

No PUM

Abstract

The invention provides a method for constructing a multi-vision-task collaborative depth-estimation model. The method comprises the following steps: constructing a fast scene depth-estimation model under a stereoscopic-vision constraint; optimizing a collaborative model of parallax geometry and prior knowledge; and refining target depth with semantic features, by constructing a semantic-segmentation module that, like the depth-estimation branch, is optimized stage by stage from coarse to fine, forming a symmetric structure with shared feature layers, and by using the different network features of the same stage to obtain, through a disparity-acquisition network, a disparity map fused with semantic geometric information, thereby refining obstacle targets. Multi-scale prior knowledge and visual semantics are embedded into the depth-estimation model; through a multi-task collaborative shared-learning mode that closely approximates the nature of human perception, the depth-estimation precision for obstacles is improved.
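The first step, scene depth estimation under a stereoscopic-vision constraint, rests on the standard rectified-stereo relation depth = f·B/d (focal length times baseline over disparity). A minimal sketch of that conversion, with illustrative KITTI-like camera parameters (the function name and values are assumptions, not from the patent):

```python
import numpy as np

def disparity_to_depth(disparity, focal_px, baseline_m, eps=1e-6):
    """Convert a disparity map (pixels) to metric depth via depth = f * B / d.

    Zero or near-zero disparities (invalid matches) map to depth 0.
    """
    disparity = np.asarray(disparity, dtype=np.float64)
    depth = np.zeros_like(disparity)
    valid = disparity > eps
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth

# Illustrative KITTI-like parameters: ~721 px focal length, 0.54 m baseline.
disp = np.array([[38.9, 0.0], [9.7, 77.9]])
depth = disparity_to_depth(disp, focal_px=721.0, baseline_m=0.54)
```

The later stages described in the abstract (semantic segmentation and coarse-to-fine refinement) operate on the disparity map before this geometric conversion, which is why a sharper disparity directly yields a sharper depth map.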

Description

Technical field

[0001] The invention relates to the technical field of electronic walking-assistance equipment, and in particular to a method for constructing a multi-vision-task collaborative depth-estimation model in electronic walking-assistance equipment.

Background technique

[0002] According to the latest statistics from the World Health Organization, there are approximately 285 million visually impaired people worldwide; China alone has 20 million people with low vision or blindness. Daily travel is the biggest problem visually impaired people face in everyday life. In an era of rapid development of technology and the Internet, they are even more eager than sighted people to enjoy the convenience brought by artificial intelligence. How to benefit the visually impaired and extend their perception of the surrounding environment is therefore an important research topic. Traditional blind-guidance assistance technologies and tools have relatively large limitatio...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T7/50, G06T5/50
CPC: G06T5/50, G06T2207/20081, G06T2207/20084, G06T2207/20221, G06T7/50
Inventor: 李婕, 周顺, 巩朋成, 石文轩, 张正文
Owner: HUBEI UNIV OF TECH