
Optimized depth extraction and passive ranging based on monocular vision

An optimized depth extraction and passive ranging technology, applied in image data processing, instruments, computing, etc., which can solve the problems of poor versatility, missing nonlinear distortion correction of the image under test, and low accuracy of target measurement and ranging, so as to achieve the effect of avoiding such errors.

Active Publication Date: 2019-01-04
ZHEJIANG FORESTRY UNIVERSITY

AI Technical Summary

Problems solved by technology

Because the internal parameters of different camera devices differ, this method must re-acquire target image information and re-establish the camera depth information extraction model for each type of camera. Moreover, different vehicle-mounted cameras have different pitch angles due to lens manufacturing and assembly tolerances, so the method of literature [21] has poor generality.
[0006] In addition, the method of literature [21] uses a vertical target to study the relationship between the imaging angle of a vertical-plane image point and its vertical pixel coordinate, and then applies this relationship to the measurement of object distance on the horizontal plane. Because the camera's horizontal and vertical distortion laws are not exactly the same, this makes the ranging accuracy relatively low.
The invention application with application number 201710849961.3 discloses an improved camera calibration model and distortion correction model suitable for smart mobile cameras (hereinafter: the improved calibration model with nonlinear distortion terms), which corrects the calibration-board images to obtain higher-precision camera intrinsic and extrinsic parameters. Its disadvantage is that the method has not been extended to nonlinear distortion correction of the image under test or to the measurement of the target object.
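As a hedged illustration of the step that the cited application does not cover, the sketch below applies calibrated intrinsics and distortion coefficients to the image under test using OpenCV's stock radial/tangential (Brown) model. The function name is illustrative only, and the stock model merely stands in for the improved calibration model with nonlinear distortion terms, which is not reproduced on this page.

```cpp
#include <opencv2/opencv.hpp>

// Sketch: correct the image under test with calibrated intrinsics and
// distortion coefficients before any measurement. Uses OpenCV's stock
// radial/tangential model; the improved calibration model with nonlinear
// distortion terms of application 201710849961.3 is not reproduced here.
cv::Mat undistortTestImage(const cv::Mat& testImage,
                           const cv::Mat& K,           // 3x3 camera matrix from calibration
                           const cv::Mat& distCoeffs)  // (k1, k2, p1, p2[, k3])
{
    cv::Mat corrected;
    cv::undistort(testImage, corrected, K, distCoeffs);
    return corrected;
}
```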



Examples


Embodiment 1

[0158] Taking the Xiaomi 3 (MI 3) mobile phone as an example, the optimized depth extraction and passive ranging method based on monocular vision of the present invention is described in detail below.

[0159] 1. Calibrate the mobile phone camera to obtain the internal parameters of the camera and image resolution

[0160] Use a checkerboard calibration board with 8*9 rows and columns and 20*20 squares as the experimental material for camera calibration. Collect 20 calibration-board pictures from different angles with the camera of the Xiaomi 3 (MI 3) mobile phone, and use the improved camera calibration model with nonlinear distortion terms, implemented in OpenCV, to calibrate the Xiaomi 3 (MI 3) mobile phone camera.

[0161] First use the fin() function to read the calibration board picture, and obtain the image resolution of the first picture through .cols and .rows; then use the find4QuadCornerSubpix() function to extract the sub-pixel corners in the calibration b...
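A minimal OpenCV C++ sketch of this calibration step is given below. It assumes the 8*9 board figure refers to the inner-corner grid and 20 to the square size (units are not stated in the embodiment), uses a placeholder file-naming scheme, and calls the stock cv::calibrateCamera routine in place of the improved calibration model with nonlinear distortion terms; none of these details come from the patent itself.

```cpp
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main() {
    // Assumption: 8*9 is taken as the inner-corner grid, 20 as the square size.
    const cv::Size boardSize(9, 8);          // corners per row x corners per column
    const cv::Size winSize(5, 5);            // sub-pixel search window
    std::vector<std::vector<cv::Point2f>> imagePoints;
    std::vector<std::vector<cv::Point3f>> objectPoints;
    cv::Size imageSize;

    for (int i = 1; i <= 20; ++i) {
        // "calib_%02d.jpg" is a placeholder naming scheme, not from the patent.
        cv::Mat img = cv::imread(cv::format("calib_%02d.jpg", i), cv::IMREAD_GRAYSCALE);
        if (img.empty()) continue;
        imageSize = cv::Size(img.cols, img.rows);   // image resolution via .cols / .rows

        std::vector<cv::Point2f> corners;
        if (!cv::findChessboardCorners(img, boardSize, corners)) continue;
        cv::find4QuadCornerSubpix(img, corners, winSize);   // sub-pixel corner refinement

        std::vector<cv::Point3f> obj;                       // planar board model
        for (int r = 0; r < boardSize.height; ++r)
            for (int c = 0; c < boardSize.width; ++c)
                obj.emplace_back(c * 20.f, r * 20.f, 0.f);

        imagePoints.push_back(corners);
        objectPoints.push_back(obj);
    }

    // Stock pinhole model with radial/tangential distortion; stands in for the
    // improved calibration model with nonlinear distortion terms.
    cv::Mat K, distCoeffs;
    std::vector<cv::Mat> rvecs, tvecs;
    double rms = cv::calibrateCamera(objectPoints, imagePoints, imageSize,
                                     K, distCoeffs, rvecs, tvecs);
    std::cout << "RMS reprojection error: " << rms << "\nK =\n" << K << std::endl;
    return 0;
}
```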



Abstract

The invention discloses an optimized depth extraction and passive ranging method based on monocular vision. The method comprises the steps of: 1, calibrating a mobile phone camera and obtaining the internal parameters of the camera and the image resolution; 2, establishing a depth extraction model; 3, acquiring the pixel values u, v of a target point through image acquisition of the target to be measured; 4, using the camera internal parameters obtained in step 1 and the target point pixel values, combined with the camera depth extraction model, to calculate the distance L between an arbitrary point on the image of the object to be measured and the mobile phone camera. The optimized monocular-vision ranging method can be applied to cameras with different angles of view, focal lengths, image resolutions and other parameters, improving ranging accuracy and providing support for object measurement and 3D reconstruction of real scenes in machine vision.
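The patent's depth extraction model itself is not reproduced on this page, so the following is only a generic monocular ground-plane ranging sketch under assumed conditions (flat ground, known camera height h above the ground and known downward pitch angle). It illustrates how the intrinsics from step 1 and the pixel values (u, v) from step 3 can be combined into a camera-to-point distance L; it is not the patent's specific model.

```cpp
#include <cmath>

// Generic monocular ground-plane ranging sketch, NOT the patent's depth
// extraction model. Assumes the target point lies on a flat ground plane,
// the camera height h above that plane and its downward pitch angle are
// known, and (fx, fy, cx, cy) are the intrinsics obtained in step 1.
double distanceToGroundPoint(double u, double v,
                             double fx, double fy, double cx, double cy,
                             double h, double pitchRad)
{
    const double xc = (u - cx) / fx;    // normalised image coordinates
    const double yc = (v - cy) / fy;
    // Downward rate of the viewing ray after rotating by the pitch angle.
    const double drop = yc * std::cos(pitchRad) + std::sin(pitchRad);
    if (drop <= 0.0) return -1.0;       // ray never reaches the ground plane
    const double t = h / drop;          // ray scale at the ground intersection
    // Euclidean distance L from the camera centre to the ground point.
    return t * std::sqrt(xc * xc + yc * yc + 1.0);
}
```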

Description

Technical field

[0001] The invention relates to the field of ground close-range photogrammetry, and in particular to a passive ranging method for a pinhole camera under a monocular vision system.

Background technique

[0002] Image-based target ranging and positioning is mainly divided into two methods: active ranging and passive ranging [1]. Active ranging installs a laser ranging device on a machine (such as a camera) to measure distance [2-4]. Passive ranging calculates the depth information of the target object from the two-dimensional digital image through machine vision, and then calculates the target object distance according to the image pixel information and the camera imaging principle [5-6]. Machine vision distance measurement is mainly divided into two types: monocular vision distance measurement and binocular vision distance measurement [7-9]. In the ranging process, the key step is to obtain the depth information of the target object. The early methods of obtain...
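For reference, the "camera imaging principle" invoked here is the standard pinhole projection relating a world point to its pixel coordinates; the depth enters through the projective scale s. This is the textbook relation only, not the patent's specific depth extraction model.

```latex
% Standard pinhole projection: (X, Y, Z) is a world point, (u, v) its pixel
% coordinates, s the projective scale (depth along the optical axis).
s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
  = K \, [\, R \mid t \,]
    \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix},
\qquad
K = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}
```

Because a single image fixes only the viewing ray, an additional constraint is needed to determine s and hence the distance; supplying that constraint is the role of the depth extraction model established in step 2.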

Claims


Application Information

IPC (8): G06T7/80, G06T7/50, G06T7/13, G06T5/00
CPC: G06T7/13, G06T7/50, G06T7/80, G06T5/80
Inventor: 徐爱俊, 武新梅, 周素茵
Owner: ZHEJIANG FORESTRY UNIVERSITY