3064 results about "Source image" patented technology

How to find the source of an image: go to images.google.com and click the camera icon. Click "upload an image", then "choose file". Locate the file on your computer and click "upload". Scroll through the search results to find the original image. In this example, the original happened to be the first result, and the entries below it linked back to the same source.
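
The same workflow can be approximated in code when the image is already hosted at a URL. The sketch below is a minimal illustration, assuming Google's long-standing "searchbyimage" URL pattern with an image_url parameter; that endpoint is an undocumented convention rather than an official API, and the function name and example URL are placeholders.

    import webbrowser
    from urllib.parse import quote_plus

    def reverse_image_search(image_url: str) -> None:
        # Assumption: Google has historically accepted this URL pattern for
        # reverse image search; it is not a documented, stable API.
        search_url = ("https://www.google.com/searchbyimage?image_url="
                      + quote_plus(image_url))
        webbrowser.open(search_url)  # opens the results page in the default browser

    reverse_image_search("https://example.com/photo.jpg")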

Method for estimating a pose of an articulated object model

Active · US20110267344A1 · Eliminate visible discontinuity · Minimize ghosting artifact · Image enhancement · Image analysis · Source image · Computer based
A computer-implemented method for estimating a pose of an articulated object model (4), wherein the articulated object model (4) is a computer based 3D model (1) of a real world object (14) observed by one or more source cameras (9), and wherein the pose of the articulated object model (4) is defined by the spatial location of joints (2) of the articulated object model (4), comprises the steps of
    • obtaining a source image (10) from a video stream;
    • processing the source image (10) to extract a source image segment (13);
    • maintaining, in a database, a set of reference silhouettes, each being associated with an articulated object model (4) and a corresponding reference pose;
    • comparing the source image segment (13) to the reference silhouettes and selecting reference silhouettes by taking into account, for each reference silhouette,
      • a matching error that indicates how closely the reference silhouette matches the source image segment (13) and/or
      • a coherence error that indicates how much the reference pose is consistent with the pose of the same real world object (14) as estimated from a preceding source image (10);
    • retrieving the corresponding reference poses of the articulated object models (4); and
    • computing an estimate of the pose of the articulated object model (4) from the reference poses of the selected reference silhouettes.
Owner: VIZRT AG
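
The silhouette-selection step in the claim above can be sketched as follows, assuming binary silhouette masks and poses stored as arrays of 3D joint locations. The weighting of the two error terms, the k-best selection, and the simple pose averaging are illustrative assumptions, not details taken from the patent.

    import numpy as np

    def estimate_pose(source_segment, reference_silhouettes, reference_poses,
                      previous_pose, alpha=0.5, k=3):
        """Score each reference silhouette by a matching error (disagreement
        with the extracted source image segment) and a coherence error
        (distance of its reference pose from the pose estimated for the
        preceding source image), then average the poses of the k best."""
        scores = []
        for silhouette, pose in zip(reference_silhouettes, reference_poses):
            matching_error = np.mean(silhouette != source_segment)    # binary masks
            coherence_error = np.linalg.norm(pose - previous_pose)    # joint locations
            scores.append(alpha * matching_error + (1.0 - alpha) * coherence_error)
        best = np.argsort(scores)[:k]
        return np.mean([reference_poses[i] for i in best], axis=0)

In this reading, the coherence term keeps the estimate temporally stable across frames, while the matching term anchors it to the segment extracted from the current source image.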

High quality wide-range multi-layer image compression coding system

Systems, methods, and computer programs for high quality wide-range multi-layer image compression coding, including:
    • consistent, ubiquitous use of floating-point values in essentially all computations;
    • an adjustable floating-point deadband;
    • use of an optimal band-split filter;
    • use of entire SNR layers at lower resolution levels;
    • targeting of specific SNR layers to specific quality improvements;
    • concentration of coding bits in regions of interest in targeted band-split and SNR layers;
    • use of statically-assigned targets for high-pass and/or SNR layers;
    • improved SNR by using a lower quantization value for regions of an image showing a higher compression coding error;
    • application of non-linear functions of color when computing difference values when creating an SNR layer;
    • use of finer overall quantization at lower resolution levels, with regional quantization scaling;
    • removal of source image noise before motion-compensated compression or film steadying;
    • use of one or more full-range low bands;
    • use of alternate quantization control images for SNR bands and other high-resolution enhancing bands;
    • application of lossless variable-length coding using adaptive regions;
    • use of a folder and file structure for layers of bits; and
    • a method of inserting new intra frames by counting the number of bits needed for a motion-compensated frame.
Owner: DEMOS GARY
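
As one concrete piece of such a pipeline, the sketch below shows what an adjustable floating-point deadband quantizer could look like. The function names, the uniform quantization step, and the per-call deadband width are assumptions for illustration; the abstract names the technique but not its parameters.

    import numpy as np

    def deadband_quantize(coefficients, step, deadband):
        """Zero out coefficients whose magnitude falls inside the (adjustable)
        dead zone, and uniformly quantize the rest; all arithmetic stays in
        floating point, matching the abstract's emphasis."""
        c = np.asarray(coefficients, dtype=np.float64)
        return np.where(np.abs(c) < deadband, 0.0, np.round(c / step))

    def deadband_dequantize(quantized, step):
        """Reconstruct approximate coefficient values from quantization indices."""
        return np.asarray(quantized, dtype=np.float64) * step

A wider deadband discards more near-zero detail (more compression, less fidelity), while a finer step at lower resolution levels, as the abstract suggests, spends relatively more bits on the low bands.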

Deinterlacing of video sources via image feature edge detection

Active · US7023487B1 · Reduce artifacts · Preserves maximum amount of vertical detail · Image enhancement · Television system details · Interlaced video · Progressive scan
An interlaced to progressive scan video converter which identifies object edges and directions, and calculates new pixel values based on the edge information. Source image data from a single video field is analyzed to detect object edges and the orientation of those edges. A 2-dimensional array of image elements surrounding each pixel location in the field is high-pass filtered along a number of different rotational vectors, and a null or minimum in the set of filtered data indicates a candidate object edge as well as the direction of that edge. A 2-dimensional array of edge candidates surrounding each pixel location is characterized to invalidate false edges by determining the number of similar and dissimilar edge orientations in the array, and then disqualifying locations which have too many dissimilar or too few similar surrounding edge candidates. The surviving edge candidates are then passed through multiple low-pass and smoothing filters to remove edge detection irregularities and spurious detections, yielding a final edge detection value for each source image pixel location. For pixel locations with a valid edge detection, new pixel data for the progressive output image is calculated by interpolating from source image pixels which are located along the detected edge orientation.
Owner: LATTICE SEMICON CORP
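
The core interpolation idea can be sketched as follows, assuming a single grayscale field stored as a 2-D NumPy array. This is a generic edge-directed (ELA-style) interpolator for illustration only; it omits the patented pipeline's high-pass filtering along rotational vectors, edge-candidate validation, and smoothing stages.

    import numpy as np

    def deinterlace_field(field, directions=(-2, -1, 0, 1, 2)):
        """Expand one field into a full frame: copy the existing lines and,
        for each missing pixel, average along whichever candidate direction
        shows the smallest difference between the lines above and below."""
        h, w = field.shape
        frame = np.zeros((2 * h, w), dtype=np.float64)
        frame[0::2] = field                      # keep the source field lines
        for y in range(h - 1):
            top, bottom = field[y], field[y + 1]
            for x in range(w):
                best_diff, value = None, float(top[x])
                for d in directions:
                    xa, xb = x + d, x - d
                    if 0 <= xa < w and 0 <= xb < w:
                        diff = abs(float(top[xa]) - float(bottom[xb]))
                        if best_diff is None or diff < best_diff:
                            best_diff = diff
                            value = (float(top[xa]) + float(bottom[xb])) / 2.0
                frame[2 * y + 1, x] = value
        frame[-1] = field[-1]                    # duplicate the last field line
        return frame

Interpolating along the detected edge direction, rather than straight vertically, is what avoids the staircase artifacts on diagonal edges that plain line doubling or vertical averaging would produce.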