
Multi-lens sensor-based image fusion method

An image fusion technology based on a multi-lens sensor, applied in image enhancement, image analysis, image data processing, etc., addressing the problem of reduced work efficiency in power-line inspection operations.

Active Publication Date: 2017-12-19
NINGBO POWER SUPPLY COMPANY STATE GRID ZHEJIANG ELECTRIC POWER +1
Cites: 8 · Cited by: 11

AI Technical Summary

Problems solved by technology

However, most current UAV systems are equipped with a single ordinary camera, so when a UAV is used to take pictures along the grid line towers, only one photo can be obtained at any single moment.
[0003] In order to obtain image data of the tower from different angles and in different levels of detail, the UAV must take multiple shots at different heights and at multiple times, which reduces the work efficiency of line patrol operations.


Examples


Embodiment 1

[0036] The present invention provides an image fusion method based on a multi-lens sensor. The multi-lens sensor includes at least four tilt sensors, whose shooting central axes are tilted at the same angle relative to the shooting plane and fixed, and a vertical sensor whose shooting central axis is perpendicular to the shooting plane. As shown in Figure 1, the image fusion method includes:

[0037] 11. Photograph the target object with the multi-lens sensor to obtain multiple images, equal in number to the lenses of the multi-lens sensor;

[0038] 12. According to the relative positional relationship of the lenses in the multi-lens sensor, construct the coordinate parameters used to convert the image information in the multiple images into spatial information, and, according to these coordinate parameters, transform the image points of the target object in the multiple images from plane-space coordinates to three-dimensional space coordinates to obtain the transformed images (a sketch of this transformation is given after step 13);

[0039] 13. Extract the feature points of the same name of mu...
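
As referenced in step 12, the following is a minimal sketch, not taken from the patent text, of how image points could be back-projected from plane-space image coordinates into three-dimensional object space once each lens has known orientation parameters. It assumes each lens is characterised by a rotation matrix built from its fixed tilt angles and by a projection centre derived from the relative positions of the lenses; the function names, the 35 mm principal distance and the 30-degree tilt in the usage example are illustrative assumptions, not values from the patent.

```python
import numpy as np

def rotation_matrix(omega, phi, kappa):
    """Build a rotation matrix from three angular orientation elements
    (omega, phi, kappa, in radians), using the common photogrammetric
    convention R = R_x(omega) @ R_y(phi) @ R_z(kappa)."""
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(omega), -np.sin(omega)],
                   [0, np.sin(omega),  np.cos(omega)]])
    Ry = np.array([[ np.cos(phi), 0, np.sin(phi)],
                   [ 0,           1, 0          ],
                   [-np.sin(phi), 0, np.cos(phi)]])
    Rz = np.array([[np.cos(kappa), -np.sin(kappa), 0],
                   [np.sin(kappa),  np.cos(kappa), 0],
                   [0,              0,             1]])
    return Rx @ Ry @ Rz

def image_point_to_object_ray(x, y, f, R, S):
    """Back-project an image point into object space.

    x, y : image-plane coordinates relative to the principal point
    f    : principal distance (focal length) of the lens
    R    : 3x3 rotation matrix of the lens (from its tilt angles)
    S    : 3-vector, projection centre of the lens in object space

    Returns the ray origin S and a unit direction vector.
    """
    direction = R @ np.array([x, y, -f], dtype=float)
    return np.asarray(S, dtype=float), direction / np.linalg.norm(direction)

# Illustrative usage: a tilt lens rotated 30 degrees about the x-axis,
# mounted 0.1 m from the rig centre, observing an image point at
# (0.002 m, -0.001 m) with a 0.035 m principal distance.
R_tilt = rotation_matrix(np.deg2rad(30.0), 0.0, 0.0)
origin, ray = image_point_to_object_ray(0.002, -0.001, 0.035, R_tilt, [0.1, 0.0, 0.0])
print(origin, ray)
```

Intersecting the rays of homologous (same-name) feature points observed by different lenses, or intersecting a single ray with a reference plane, would then yield the three-dimensional coordinates used when splicing the images.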



Abstract

The invention provides a multi-lens sensor-based image fusion method, and belongs to the field of image synthesis. The method includes: acquiring multiple images on the basis of a multi-lens sensor; constructing coordinate parameters according to the relative positional relationship of the multi-lens sensor, and completing the coordinate transformation of image points on a target object in the multiple images from plane space to three-dimensional space according to the coordinate parameters, to obtain transformed images; extracting contour information of the target object according to classical collinearity equations, orientation elements of the multi-lens sensor and attribute values of feature points; and splicing the multiple images according to the contour information to obtain a spliced image. Through the above processing, the five images can be spliced into one image, and the relative spatial relationships between different regions of the target object can be obtained from the resulting image, so as to establish a rigorous spatial mathematical model of the multi-lens camera. On the basis of this technology, a user can carry out fast unmanned-aerial-vehicle line inspection of transmission lines, fuse the images collected by multiple lenses, and establish a real geographical environment scene.
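
For context (and not reproduced from the patent itself), the "classical collinearity equations" referenced in the abstract are the standard photogrammetric relations between an image point (x, y), the interior orientation elements (x_0, y_0, f), the projection centre (X_S, Y_S, Z_S), the rotation-matrix entries a_i, b_i, c_i, and the corresponding object-space point (X, Y, Z):

```latex
x - x_0 = -f \,
  \frac{a_1 (X - X_S) + b_1 (Y - Y_S) + c_1 (Z - Z_S)}
       {a_3 (X - X_S) + b_3 (Y - Y_S) + c_3 (Z - Z_S)},
\qquad
y - y_0 = -f \,
  \frac{a_2 (X - X_S) + b_2 (Y - Y_S) + c_2 (Z - Z_S)}
       {a_3 (X - X_S) + b_3 (Y - Y_S) + c_3 (Z - Z_S)}
```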

Description

Technical field

[0001] The invention belongs to the field of image synthesis, and in particular relates to an image fusion method based on a multi-lens sensor.

Background technique

[0002] At present, owing to its high-altitude, long-distance, fast and autonomous operation capabilities, the UAV system is widely used in the rapid inspection of power grid lines. However, most current UAV systems are equipped with a single ordinary camera; when a UAV is used to take pictures along the grid line towers, only one photo can be obtained at any single moment.

[0003] In order to obtain image data of the tower from different angles and in different levels of detail, the UAV must take multiple shots at different heights and at multiple times, thus reducing the work efficiency of line patrol operations.

Contents of the invention

[0004] In order to overcome the shortcomings and deficiencies of the prior art, the present invention provides an image fusion method for synthesizing a plurality of images so a...


Application Information

IPC(8): G06T3/40, G06T5/50
CPC: G06T3/4038, G06T5/50, G06T2207/20221
Inventors: 顾天雄, 黄晓明, 曹炯, 江炯, 汪从敏, 何玉涛, 张平, 程国开, 张建, 黎天翔
Owner: NINGBO POWER SUPPLY COMPANY, STATE GRID ZHEJIANG ELECTRIC POWER