185 results about "Pixel mapping" patented technology

Method and system for detection and removal of redeyes

Systems and methods for detecting and correcting redeye defects in a digital image are described. In one aspect, the invention proposes a color image segmentation method. In accordance with this method, pixels of the input image are segmented by mapping the pixels to a color space and using a number of segmentation surfaces defined in the color space. Based on the segmentation results, candidate redeye pixel regions are then identified. In another aspect, the invention features a method to classify candidate redeye pixel regions into redeye pixel regions and non-redeye pixel regions. In accordance with this method, the candidate redeye pixel regions are processed by a cascade of classification stages. In each classification stage, a plurality of attributes is computed for the input candidate redeye pixel region to define a feature vector, and the feature vector is fed to a pre-trained binary classifier. A candidate redeye pixel region that passes a classification stage is further processed by the next classification stage, while a region that fails is rejected and dropped from further processing. Only the candidate redeye pixel regions that pass all the classification stages are identified as redeye pixel regions. In another aspect, the invention describes a set of attributes that are effective in discriminating redeye pixel regions from non-redeye pixel regions. The invention also describes a scheme to generate a plurality of attributes and a machine learning scheme to select the best attributes for classifier design.
Owner:LUO HUITAO
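
The cascade structure described in this abstract can be sketched in a few lines: segment candidate pixels with a simple color-space rule, then run each candidate region through a series of classification stages and reject it at the first stage it fails. This is a minimal sketch only; the redness measure, the threshold, and the stage feature/classifier functions below are illustrative assumptions, not the patent's actual segmentation surfaces or trained classifiers.

```python
# Minimal sketch: color-space segmentation followed by a classification cascade.
# The redness projection, threshold, and stage classifiers are assumed for illustration.
import numpy as np

def redness(rgb):
    """Project pixels onto a scalar 'redness' axis of an assumed color space."""
    r, g, b = (rgb[..., c].astype(float) for c in range(3))
    return r / (r + g + b + 1e-6)

def segment_candidates(rgb, threshold=0.55):
    """One segmentation surface: a threshold on the redness projection."""
    return redness(rgb) > threshold

def passes_cascade(region_pixels, stages):
    """Each stage is a (feature_fn, classifier_fn) pair; reject at the first failure."""
    for feature_fn, classifier_fn in stages:
        if not classifier_fn(feature_fn(region_pixels)):
            return False          # rejected and dropped from further processing
    return True                   # passed every stage -> identified as a redeye region
```

A real pipeline would additionally group the segmented mask into connected regions and learn the per-stage classifiers from labeled data.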

Multi-viewpoint image super-resolution method based on depth information

The invention provides a multi-viewpoint image super-resolution method based on depth information. The method mainly addresses the edge-artifact problem that arises in the prior art when super-resolution reconstruction is performed on a low-resolution viewpoint image. The method includes: mapping the high-resolution color image of viewpoint k onto the image position of viewpoint n according to a pinhole camera model, using the depth information, the related camera parameters and a backward projection method; performing validity detection on the projected image based on joint inter-viewpoint pixel mapping relations of depth differences and color differences, and retaining only the pixel points that pass the validity detection, with illumination adjustment performed in advance on the color image of viewpoint k so that luminance differences between viewpoints do not affect the result; separating out the high-frequency information of the projected image; and adding the high-frequency information to the up-sampled low-resolution color image of viewpoint n to obtain the super-resolution reconstructed image of viewpoint n. The method effectively alleviates edge artifacts when super-resolving the low-resolution viewpoint and improves the quality of the super-resolution reconstructed image.
Owner:SHANDONG UNIV
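
A rough sketch of the validity check and high-frequency transfer steps is given below, assuming the projection of viewpoint k into viewpoint n has already been computed. The tolerance values are illustrative, and a Gaussian blur (scipy.ndimage.gaussian_filter) stands in for whatever low-pass filter the method actually uses to separate high-frequency detail.

```python
# Sketch of joint depth/color validity masking and high-frequency transfer.
# Arrays are assumed to be (H, W) for depth and (H, W, 3) for color; thresholds are illustrative.
import numpy as np
from scipy.ndimage import gaussian_filter

def validity_mask(depth_proj, depth_n, color_proj, color_n_up, d_tol=0.05, c_tol=20.0):
    """Keep pixels whose joint depth and color differences between the projected
    view-k image and viewpoint n stay within (assumed) tolerances."""
    depth_ok = np.abs(depth_proj - depth_n) < d_tol
    color_ok = np.abs(color_proj - color_n_up).mean(axis=-1) < c_tol
    return depth_ok & color_ok

def add_high_frequency(color_proj, color_n_up, mask, sigma=1.5):
    """Separate the high-frequency detail of the projected image and add it to the
    up-sampled low-resolution image of viewpoint n, only where the mask is valid."""
    low_pass = gaussian_filter(color_proj, sigma=(sigma, sigma, 0))
    high_freq = color_proj - low_pass
    out = color_n_up.copy()
    out[mask] += high_freq[mask]
    return out
```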

Three-dimensional face reconstruction method, system and device for fine structure

The invention belongs to the technical field of image processing and pattern recognition, and particularly relates to a three-dimensional face reconstruction method, system and device for fine structure, aiming to solve the problem of poor three-dimensional face reconstruction precision. The method comprises: obtaining a two-dimensional face image to be reconstructed; obtaining a three-dimensional spatial transformation function and an initial three-dimensional face shape; performing spatial transformation on the initial three-dimensional face shape and mapping each pixel of the image face region to the UV texture space of a 3DMM model to obtain a UV texture map; obtaining a UV visibility map and extracting features from it to obtain an attention feature map; mapping each point of the initial three-dimensional face shape to the UV texture space to obtain a UV shape map; multiplying the attention feature map by the UV texture map and adding the product to the UV shape map; and obtaining the update amount of each point of the 3DMM face model and adding it to the corresponding point of the initial three-dimensional face shape to obtain the three-dimensional reconstruction result. The invention improves the precision of face model reconstruction.
Owner:INST OF AUTOMATION CHINESE ACAD OF SCI
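
The UV-space fusion and per-point update steps can be sketched as follows. The map resolutions, the nearest-neighbour sampling used to read per-vertex values back out of UV space, and the simple element-wise fusion are all simplifying assumptions, not the patent's network architecture.

```python
# Sketch of UV-space fusion (attention-weighted texture added to the UV shape map)
# and of reading a per-vertex update back from a UV-space result.
import numpy as np

def fuse_uv(uv_shape, uv_texture, uv_attention):
    """Weight the UV texture map by the attention feature map, then add it to the UV shape map."""
    return uv_shape + uv_attention * uv_texture

def sample_uv(uv_map, uv_coords):
    """Nearest-neighbour read of a UV-space map at per-vertex (u, v) coordinates in [0, 1]."""
    h, w = uv_map.shape[:2]
    us = np.clip((uv_coords[:, 0] * (w - 1)).round().astype(int), 0, w - 1)
    vs = np.clip((uv_coords[:, 1] * (h - 1)).round().astype(int), 0, h - 1)
    return uv_map[vs, us]

def apply_update(initial_shape, per_point_update):
    """Add the predicted update amount to each point of the initial 3DMM face shape."""
    return initial_shape + per_point_update
```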

Object-based virtual image drawing method of three-dimensional/free viewpoint television

Inactive · CN101695140A · Advantages: improved drawing speed, guaranteed drawing accuracy · Topics: stereoscopic systems, color image, object-based
The invention discloses an object-based virtual image drawing method for a three-dimensional/free-viewpoint television, which comprises the following steps: dividing a color image into the inner region, background region and boundary region of an object using the object mask image of the color image, and then dividing the color image into a number of blocks of different sizes according to the three regions. For a whole-mapping block, the three-dimensional image transformation is carried out on only one pixel point in the block to determine the coordinate mapping relation for projecting that pixel from the color image into the virtual viewpoint color image, and the whole block is then projected into the virtual viewpoint color image using this coordinate mapping relation; because the transformation is computed for only one pixel point, the drawing speed for whole-mapping blocks is effectively improved. For sequential-pixel-mapping blocks, which lie mainly in the boundary region, each pixel point in the block is still mapped into the virtual viewpoint color image by the sequential-pixel-mapping three-dimensional image transformation, thereby effectively guaranteeing the drawing precision.
Owner:上海贵知知识产权服务有限公司 (Shanghai Guizhi Intellectual Property Service Co., Ltd.)
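
The block-wise projection strategy can be illustrated with the sketch below, where warp_pixel is assumed to be the per-pixel three-dimensional image transformation returning target coordinates in the virtual view; the block representation and the handling of occlusions/holes are simplified away.

```python
# Sketch of block-wise projection: whole-mapping blocks reuse one pixel's warp,
# boundary (sequential-pixel-mapping) blocks warp every pixel individually.
def project_block(color, block, block_type, warp_pixel):
    """Project one block of the color image into the virtual viewpoint image.
    block is (y0, x0, y1, x1); warp_pixel(y, x) -> (ty, tx) is an assumed
    per-pixel 3D image transformation."""
    y0, x0, y1, x1 = block
    if block_type == "whole_mapping":
        # Transform a single pixel and reuse its offset for the entire block.
        ty, tx = warp_pixel(y0, x0)
        dy, dx = ty - y0, tx - x0
        return [((y + dy, x + dx), color[y][x])
                for y in range(y0, y1) for x in range(x0, x1)]
    # Sequential-pixel-mapping blocks, mainly on object boundaries:
    # transform every pixel individually to preserve drawing precision.
    return [(warp_pixel(y, x), color[y][x])
            for y in range(y0, y1) for x in range(x0, x1)]
```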