
699 results for “Camera array” patented technology

Automatic video system using multiple cameras

A camera array captures plural component images which are combined into a single scene from which “panning” and “zooming” within the scene are performed. In one embodiment, each camera of the array is a fixed digital camera. The images from each camera are warped and blended such that the combined image is seamless with respect to each of the component images. Warping of the digital images is performed via pre-calculated non-dynamic equations that are computed from a registration of the camera array. The process of registering each camera in the array is performed either manually, by selecting corresponding points or sets of points in two or more images, or automatically, by introducing a source object (a laser light source, for example) into the scene being captured by the camera array and registering the positions of the source object as it appears in each of the images. The warping equations are calculated from the registration data, and each scene captured by the camera array is warped and combined using the same equations. A scene captured by the camera array can be zoomed, or selectively steered to an area of interest. This zooming or steering, being done in the digital domain, is performed nearly instantaneously when compared to cameras with mechanical zoom and steering functions.
Owner:FUJIFILM BUSINESS INNOVATION CORP
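The core of the abstract is that warping is done with a fixed, pre-calculated mapping rather than one recomputed per frame. A minimal sketch of that idea, assuming a planar homography as the per-camera warping equation (the patent does not specify the equation form) and nearest-neighbour inverse mapping; blending of overlapping cameras is omitted:

```python
import numpy as np

def warp_image(img, H, out_shape):
    """Warp a 2-D grayscale image into the composite frame using a fixed
    (pre-calculated) homography H via inverse mapping. H would come from
    the one-time camera-array registration step; here it is an assumed
    illustrative model, not the patent's actual equations."""
    h_out, w_out = out_shape
    ys, xs = np.mgrid[0:h_out, 0:w_out]
    ones = np.ones_like(xs)
    # Homogeneous output coordinates, mapped back into the source image.
    dst = np.stack([xs, ys, ones]).reshape(3, -1).astype(float)
    src = np.linalg.inv(H) @ dst
    src /= src[2]                      # dehomogenise
    sx = np.round(src[0]).astype(int)  # nearest-neighbour sampling
    sy = np.round(src[1]).astype(int)
    valid = (sx >= 0) & (sx < img.shape[1]) & (sy >= 0) & (sy < img.shape[0])
    out = np.zeros(out_shape, dtype=img.dtype)
    out.reshape(-1)[valid] = img[sy[valid], sx[valid]]
    return out
```

Because H is fixed after registration, the same mapping (or a precomputed lookup table derived from it) is reused for every frame, which is what makes digital panning and zooming nearly instantaneous compared to mechanical steering.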

Systems and Methods for Estimating Depth from Projected Texture using Camera Arrays

Systems and methods in accordance with embodiments of the invention estimate depth from projected texture using camera arrays. One embodiment of the invention includes: at least one two-dimensional array of cameras comprising a plurality of cameras; an illumination system configured to illuminate a scene with a projected texture; a processor; and memory containing an image processing pipeline application and an illumination system controller application. The illumination system controller application directs the processor to control the illumination system to illuminate a scene with a projected texture. The image processing pipeline application directs the processor to: utilize the illumination system controller application to control the illumination system to illuminate a scene with a projected texture; capture a set of images of the scene illuminated with the projected texture; and determine depth estimates for pixel locations in an image from a reference viewpoint using at least a subset of the set of images. Generating a depth estimate for a given pixel location in the image from the reference viewpoint includes: identifying pixels in the at least a subset of the set of images that correspond to the given pixel location in the image from the reference viewpoint based upon expected disparity at a plurality of depths along a plurality of epipolar lines aligned at different angles; comparing the similarity of the corresponding pixels identified at each of the plurality of depths; and selecting the depth from the plurality of depths at which the identified corresponding pixels have the highest degree of similarity as the depth estimate for the given pixel location in the image from the reference viewpoint.
Owner:FOTONATION LTD

Uncalibrated multi-viewpoint image correction method for parallel camera array

Inactive · CN102065313A · Freely adjust horizontal parallax · Increase the use range of multi-viewpoint correction · Image analysis · Stereoscopic systems · Parallax · Scale-invariant feature transform
The invention relates to an uncalibrated multi-viewpoint image correction method for a parallel camera array. The method comprises the steps of: first, extracting a set of feature points from the viewpoint images and determining matching point pairs between every two adjacent images; then introducing the RANSAC (Random Sample Consensus) algorithm to enhance the matching precision of the SIFT (Scale-Invariant Feature Transform) feature points, and providing a block-based feature extraction method so that the refined positional information of the feature points serves as input to the subsequent correction steps, from which a correction matrix for the uncalibrated stereoscopic image pairs is calculated; then projecting the several non-coplanar correction planes onto a single common correction plane and calculating the horizontal distance between adjacent viewpoints on that common plane; and finally, adjusting the viewpoint positions horizontally until the parallaxes are uniform, which completes the correction. The composite stereoscopic image produced by this uncalibrated multi-viewpoint correction has a strong sense of depth and breadth, a markedly enhanced stereoscopic effect compared with the uncorrected image, and can be applied to the front-end signal processing of a wide range of 3DTV application devices.
Owner:SHANGHAI UNIV
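The RANSAC step above exists to discard mismatched SIFT pairs before the correction matrix is fitted. A toy sketch of that filtering, assuming a simple 2-D translation model between views so the sample stays short (a real rectifier would fit a fundamental matrix or homography instead; function and parameter names are illustrative):

```python
import numpy as np

def ransac_filter(src, dst, n_iter=200, thresh=2.0, seed=0):
    """RANSAC over putative SIFT-style matches (src[i] <-> dst[i]).
    Hypothesise a translation from one random match, count how many
    matches agree within thresh pixels, keep the largest consensus set,
    then refit the translation on those inliers only."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(n_iter):
        i = rng.integers(len(src))          # minimal sample: one match
        t = dst[i] - src[i]                 # hypothesised translation
        err = np.linalg.norm(src + t - dst, axis=1)
        inliers = err < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refit on the consensus set for a more precise estimate.
    t = (dst[best_inliers] - src[best_inliers]).mean(axis=0)
    return t, best_inliers
```

In the method described by the abstract, only the surviving inlier matches (further refined by the block-based feature extraction) feed the computation of the correction matrix, so a single gross mismatch cannot skew the rectification.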