
70 results about "Focus stacking" patented technology

Focus stacking (also known as focal plane merging and z-stacking or focus blending) is a digital image processing technique which combines multiple images taken at different focus distances to give a resulting image with a greater depth of field (DOF) than any of the individual source images. Focus stacking can be used in any situation where individual images have a very shallow depth of field; macro photography and optical microscopy are two typical examples. Focus stacking can also be useful in landscape photography.
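The core idea can be sketched in a few lines: for every pixel, keep the value from whichever source image is locally sharpest there. The sketch below uses an absolute Laplacian response as the sharpness measure; real focus-stacking tools use more robust measures and align the images first.

```python
import numpy as np

def focus_stack(images):
    """Merge images focused at different distances into one image with
    extended depth of field: per pixel, keep the value from the source
    image whose local Laplacian response (a cheap sharpness measure)
    is largest there. Borders wrap around in this sketch."""
    stack = np.stack([np.asarray(img, dtype=float) for img in images])
    lap = np.abs(
        -4 * stack
        + np.roll(stack, 1, axis=1) + np.roll(stack, -1, axis=1)
        + np.roll(stack, 1, axis=2) + np.roll(stack, -1, axis=2)
    )
    best = np.argmax(lap, axis=0)   # index of sharpest source per pixel
    return np.take_along_axis(stack, best[None], axis=0)[0]
```

Given one image sharp on the left and one sharp on the right, the merged result recovers detail on both sides.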

System for generating a synthetic 2d image with an enhanced depth of field of a biological sample

The present invention relates to a system for generating a synthetic 2D image with an enhanced depth of field of a biological sample. It is described to acquire (110) with a microscope-scanner (20) first image data at a first lateral position of the biological sample and second image data at a second lateral position of the biological sample. The microscope-scanner is used to acquire (120) third image data at the first lateral position and fourth image data at the second lateral position, wherein the third image data is acquired at a depth that is different than that for the first image data and the fourth image data is acquired at a depth that is different than that for the second image data. First working image data is generated (130) for the first lateral position, the generation comprising processing the first image data and the third image data by a focus stacking algorithm. Second working image data is generated (140) for the second lateral position, the generation comprising processing the second image data and the fourth image data by the focus stacking algorithm. The first working image data and the second working image data are combined (150), during acquisition of image data, to generate the synthetic 2D image with an enhanced depth of field of the biological sample.
Owner:KONINKLIJKE PHILIPS NV
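The abstract does not disclose which focus-stacking algorithm is used, so the sketch below substitutes a simple gradient-magnitude rule; the structure it illustrates is the patent's tiling scheme: each lateral position is stacked from its z-planes as soon as they arrive, and the working tiles are then combined into the synthetic 2D image.

```python
import numpy as np

def sharpest(z_planes):
    """Per pixel, keep the value from the z-plane with the largest local
    gradient magnitude -- a stand-in for the (unspecified) focus-stacking
    algorithm of the patent."""
    s = np.stack(z_planes).astype(float)
    gy = np.abs(np.diff(s, axis=1, append=s[:, -1:, :]))
    gx = np.abs(np.diff(s, axis=2, append=s[:, :, -1:]))
    best = np.argmax(gy + gx, axis=0)
    return np.take_along_axis(s, best[None], axis=0)[0]

def scan_and_merge(tiles_by_position):
    """tiles_by_position: list (in lateral scan order) of lists of z-plane
    tiles. Each lateral tile is stacked independently, then the working
    tiles are concatenated side by side into the synthetic 2D image."""
    working = [sharpest(z_planes) for z_planes in tiles_by_position]
    return np.concatenate(working, axis=1)
```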

High-performance multi-scale target detection method based on deep learning

Pending · CN112149665A · Small amount of calculation · Breaks through the bottleneck blocking practical application · Image enhancement · Image analysis · Region selection · Engineering
The invention discloses a high-performance multi-scale target detection method based on deep learning. The method comprises a training process and a detection process. The training process comprises the following steps: 1.1, inputting a picture and generating image blocks; 1.2, screening positive image blocks; 1.3, screening negative image blocks; 1.4, inputting the image blocks and training a model. The detection process is as follows: 2.1, predicting a focus pixel set; 2.2, generating focus image blocks; 2.3, running the RoI stage; 2.4, carrying out classification and regression; 2.5, carrying out focus synthesis. The method provides a brand-new candidate region selection method for the training process and adopts a shallow-to-deep strategy for the detection process, ignoring regions that cannot contain targets. Compared with a conventional detection algorithm that processes the whole image pyramid, the calculation amount of this multi-scale detection method is remarkably reduced while detection accuracy is improved. The detection rate is greatly increased, breaking through the bottleneck that prevents conventional multi-scale detection algorithms from being put into practical application.
Owner:ZHEJIANG UNIV OF TECH
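Step 2.2, generating a focus image block from the predicted focus pixels, can be illustrated with a minimal sketch. The patent's actual block-generation procedure is not given in the abstract; the version below simply crops the image to the padded bounding box of the predicted focus pixels, so that only that region (rather than the whole pyramid level) is passed to the detector. The single-chip simplification is an assumption.

```python
import numpy as np

def focus_chip(image, focus_mask, pad=1):
    """Crop the image to the bounding box of the predicted 'focus pixels'
    (plus padding), so the detector only processes a region that may
    contain small targets instead of the whole image."""
    ys, xs = np.nonzero(focus_mask)
    if ys.size == 0:
        return None                     # nothing predicted: skip this scale
    y0 = max(int(ys.min()) - pad, 0)
    x0 = max(int(xs.min()) - pad, 0)
    y1 = min(int(ys.max()) + pad + 1, image.shape[0])
    x1 = min(int(xs.max()) + pad + 1, image.shape[1])
    return image[y0:y1, x0:x1]
```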

Light field image feature point detection method based on multi-scale Harris

Active · CN110490924A · Overcomes the shortcomings of occlusion, loss of depth, etc. · Comprehensive description · Image enhancement · Image analysis · Angular point · Image resolution
The invention discloses a light field image feature detection method based on multi-scale Harris. The method specifically comprises the steps of: reading a light field original image parameter file into MATLAB, and decoding and processing it into an effective four-dimensional light field matrix; taking the maximum value of the angular resolution [u, v] as the length n of a slope list to obtain the slope list, and refocusing at each slope in the list to obtain the corresponding focus stack image; carrying out multi-scale Harris corner detection on each focus stack image; carrying out non-maximum suppression on the corners detected at each scale of the current focus stack to reduce the influence of multiple responses; carrying out multi-scale judgment on the candidate corner points: if a candidate corner point appears at multiple scales it is retained, otherwise it is deleted, and the finally retained corner points are the feature points of the light field image; and obtaining the real information of the whole space from the position and angle information in the light field image. The defects of traditional imaging, such as occlusion and depth loss, are thereby overcome, and the scene description is more comprehensive.
Owner:XIAN UNIV OF TECH
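The Harris response at the heart of the method can be sketched as follows. This is a single-scale version (the patent evaluates it at multiple smoothing scales on each refocused slice), and a 3x3 box filter with wrap-around borders stands in for the usual Gaussian window.

```python
import numpy as np

def harris_response(img, k=0.04):
    """Harris corner response R = det(M) - k * trace(M)^2, where M is the
    box-filtered structure tensor of the image gradients."""
    img = np.asarray(img, dtype=float)
    iy, ix = np.gradient(img)           # gradients along rows, columns

    def box3(a):                        # 3x3 mean via shifts (wrap borders)
        return sum(np.roll(np.roll(a, dy, 0), dx, 1)
                   for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0

    ixx, iyy, ixy = box3(ix * ix), box3(iy * iy), box3(ix * iy)
    det = ixx * iyy - ixy ** 2
    trace = ixx + iyy
    return det - k * trace ** 2
```

On an image containing a single step corner, the response peaks at the corner, while pure edges score negatively.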

Monocular visual focusing stack acquisition and scene reconstruction method

The invention discloses a monocular visual focusing stack acquisition and scene reconstruction method. The method comprises the following steps: controlling the rotation of a prime lens through an electrically controlled rotator to acquire focusing stack data; during rotation of the prime lens, keeping the detector fixed and synchronously translating the prime lens along the optical axis of the camera; establishing a correspondence between the rotation angle of the rotator and the imaging surface depth according to the position adjustment of the prime lens; establishing a correspondence between the rotation angle and the focused object surface depth by combining the previous correspondence with the object-image relationship of lens imaging; and, using this correspondence, calculating the depth of each object point by maximizing a focusing measure function, then outputting a scene depth map and an all-in-focus map so as to reconstruct the three-dimensional scene. The method satisfies the requirements for three-dimensional scene reconstruction, image depth information and full focusing within the camera's field of view (FOV), and can generate depth images and all-in-focus images.
Owner:BEIJING INFORMATION SCI & TECH UNIV
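The two key steps, mapping the lens position to an object depth via the thin-lens object-image relationship and picking the depth that maximizes a focus measure per pixel, can be sketched as below. The squared-Laplacian focus measure is an assumption; the patent only specifies "maximized focusing measurement".

```python
import numpy as np

def object_depth(f, v):
    """Thin-lens relation 1/f = 1/u + 1/v: the rotator angle sets the
    image distance v; this returns the object depth u rendered in focus."""
    return 1.0 / (1.0 / f - 1.0 / v)

def depth_from_focus(stack, depths):
    """stack: (N, H, W) slices focused at depths[i]. Per pixel, pick the
    slice maximizing a squared-Laplacian focus measure; return the scene
    depth map and the all-in-focus image."""
    s = np.asarray(stack, dtype=float)
    lap = (-4 * s
           + np.roll(s, 1, 1) + np.roll(s, -1, 1)
           + np.roll(s, 1, 2) + np.roll(s, -1, 2)) ** 2
    best = np.argmax(lap, axis=0)
    depth_map = np.asarray(depths, dtype=float)[best]
    all_in_focus = np.take_along_axis(s, best[None], axis=0)[0]
    return depth_map, all_in_focus
```

For example, a 50 mm lens focused with the image plane at 60 mm is sharp on objects 300 mm away.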

Filtered back-projection method and apparatus for reconstructing light field by focus stack

The invention discloses a filtered back-projection method and apparatus for reconstructing a light field from a focus stack. The method mainly comprises the steps of: deriving the geometric relationship between the four-dimensional light field and the focus stack, building a projection model in which the light field forms the focus stack, and forming a projection operator; based on the projection model, establishing the frequency-domain relationship between the four-dimensional light field and the focus stack, and forming a Fourier slice relationship; based on the Fourier slice relationship, establishing filtered back-projection and convolution back-projection methods for reconstructing the light field from the focus stack; and selecting optimized filtering and convolution functions to reconstruct the light field. The focus stack is an image sequence collected through relative motion of the detector and the lens; by selecting optimized filtering and convolution functions, a high-precision four-dimensional light field can be reconstructed. The four-dimensional light field enables three-dimensional reconstruction at the camera's shooting view angle, and can provide accurate three-dimensional structure information for virtual reality and geometric measurement.
Owner:BEIJING INFORMATION SCI & TECH UNIV
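The projection operator in the first step can be illustrated with a reduced two-plane light field (one angular and one spatial dimension): each refocused slice is a shear-and-integrate over the angular axis. This shift-and-add sketch uses integer shifts and wrap-around borders for brevity; the filtering and back-projection steps are not shown.

```python
import numpy as np

def refocus(lightfield, shift_per_view):
    """One focus-stack slice from a (views, width) two-plane light field:
    shear each angular view in proportion to its offset from the central
    view, then integrate (average) over the angular axis."""
    center = lightfield.shape[0] // 2
    sheared = [np.roll(view, int(round((i - center) * shift_per_view)))
               for i, view in enumerate(lightfield)]
    return np.mean(sheared, axis=0)

def focus_stack_projection(lightfield, shifts):
    """The projection operator: one refocused slice per shear value,
    i.e. per focal depth."""
    return np.stack([refocus(lightfield, s) for s in shifts])
```

A point source seen with one pixel of parallax per view is realigned (energy 1.0 at one pixel) in the slice whose shear matches its depth, and spread out in the others.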

Focusing stack imaging system presetting position calibration method based on focusing measurement

The invention discloses a preset position calibration method, based on focusing measurement, for a focusing stack imaging system. The method comprises the following steps: 1, setting up a preset position calibration environment; 2, collecting images of a calibration plate at a preset position in the calibration environment, together with the corresponding type-II identification line; 3, calculating the focusing measure of each collected image, and obtaining the preset position corresponding to the maximum focusing measure through data fitting; 4, verifying the repeatability and accuracy of calibration from the mean value and variance of multiple calibration results, obtained by calibrating the same preset position multiple times; and 5, acquiring focusing stack data of an actual scene using the calibrated preset position, and reconstructing a scene depth map and an all-in-focus map. The method can improve focusing stack data collection efficiency, achieves high-precision reconstruction of the depth of a three-dimensional scene, and can also provide a reference and theoretical basis for building a three-dimensional digital space and improving calculation methods.
Owner:BEIJING INFORMATION SCI & TECH UNIV +1
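Step 3, locating the position of maximum focus by data fitting, can be sketched as a quadratic fit: sample the focus measure at a few positions, fit a parabola, and take its vertex as the calibrated preset position. The quadratic model is an assumption; the patent does not specify the fitting function.

```python
import numpy as np

def calibrate_preset(positions, focus_measures):
    """Fit a quadratic to (preset position, focus measure) samples and
    return its vertex: the sub-step position of maximum focus, taken as
    the calibrated preset position."""
    a, b, _ = np.polyfit(positions, focus_measures, 2)
    return -b / (2.0 * a)
```

Repeating this for the same preset position gives the set of results whose mean and variance are checked in step 4.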

Light field semantic segmentation method and system, electronic terminal and storage medium

Active · CN111382753A · No longer limited by depth-information loss · Effectively identifies occlusion · Character and pattern recognition · Medicine · Superpixel segmentation
The invention provides a light field semantic segmentation method and system, an electronic terminal and a storage medium. The method comprises the steps that: a reference view angle is selected from the camera plane to perform light field sampling; the superpixel set of the reference view angle is obtained with a superpixel segmentation algorithm, and reprojection is performed on it to obtain the superpixel sets of the other view angles corresponding to the reference view angle; focal length fusion is carried out on the multiple images of different refocusing depths in the focusing stack, and the superpixel sets to which pixels belong are voted on; semantic analysis is performed on the images in the focusing stack with a neural network algorithm to obtain the semantic classification of each superpixel set; and the semantic classifications of all the superpixel sets in the focusing stack are summarized and voted on, so that a unique semantic category number corresponding to each superpixel set is determined. The method is no longer limited by the depth information loss caused by projection transformation, and effectively recognizes occlusion so as to perform correct category prediction on the pixel points of an occluded object.
Owner:YAOKE INTELLIGENT TECH SHANGHAI CO LTD
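The final voting step can be sketched as a majority vote across the focus-stack slices. The dict-based representation of per-slice predictions is an assumption made for illustration.

```python
from collections import Counter

def vote_semantic_labels(labels_per_slice):
    """labels_per_slice: one dict per focus-stack slice, mapping
    superpixel id -> predicted class for that slice. Returns the
    majority-vote class per superpixel across all slices."""
    votes = {}
    for labels in labels_per_slice:
        for sp, cls in labels.items():
            votes.setdefault(sp, Counter())[cls] += 1
    return {sp: counts.most_common(1)[0][0] for sp, counts in votes.items()}
```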