166 results about "Pixel value difference" patented technology

Model-based grayscale registration of medical images

Numerical image processing of two or more medical images to provide grayscale registration thereof is described, the numerical image processing algorithms being based at least in part on a model of medical image acquisition. The grayscale-registered temporal images may then be displayed for visual comparison by a clinician and/or further processed by a computer-aided diagnosis (CAD) system for detection of medical abnormalities therein. A parametric method includes spatially registering two images and performing grayscale registration of the images. A parametric transform model, e.g., an analog-to-analog, digital-to-digital, analog-to-digital, or digital-to-analog model, is selected based on the image acquisition method(s) of the images, i.e., digital or analog/film. Grayscale registration involves generating a joint pixel value histogram from the two images, statistically fitting the parameters of the transform model to the joint histogram, generating a lookup table, and using the lookup table to transform and register the pixel values of one image to those of the other image. The models take into account the most relevant image acquisition parameters that influence pixel value differences between images, e.g., tissue compression, incident radiation intensity, exposure time, film and digitizer characteristic curves for analog images, and digital detector response for digital images. The method facilitates temporal comparisons of medical images such as mammograms and/or comparisons of analog with digital images.
Owner:HOLOGIC INC
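
As a rough illustration of the joint-histogram and lookup-table step, the Python sketch below assumes two 8-bit images that are already spatially registered and fits a simple linear transform in place of the acquisition-specific parametric models described in the patent; the function name and parameters are illustrative only.

    import numpy as np

    def grayscale_register(moving, reference, n_bins=256):
        """Map pixel values of `moving` onto the gray scale of `reference`
        via a joint pixel value histogram and a fitted linear model (sketch;
        assumes uint8 inputs that are already spatially registered)."""
        # Joint pixel-value histogram of the two images
        joint, _, _ = np.histogram2d(moving.ravel(), reference.ravel(),
                                     bins=n_bins, range=[[0, 255], [0, 255]])

        # Fit ref ~= a*mov + b, weighting each (mov, ref) bin by its count
        mov_vals, ref_vals = np.meshgrid(np.arange(n_bins), np.arange(n_bins),
                                         indexing='ij')
        a, b = np.polyfit(mov_vals.ravel(), ref_vals.ravel(), 1, w=joint.ravel())

        # Build a lookup table and transform the moving image's pixel values
        lut = np.clip(a * np.arange(n_bins) + b, 0, 255).astype(np.uint8)
        return lut[moving]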

Apparatus and method for generating panorama images and apparatus and method for object-tracking using the same

Provided are an apparatus and method for generating panorama images and an apparatus and method for tracking an object using the same. The panorama-generating apparatus estimates regional motion vectors for the respective lower regions set in an image frame, determines a frame motion vector from the regional motion vectors based on a codebook storing sets of normalized regional motion vectors corresponding to camera motions, accumulates the motion mediating variables computed for successive image frames to determine overlapping regions, and matches the overlapping regions to generate a panorama image. The object-tracking apparatus sets the panorama image generated by the panorama-generating apparatus as a background image and determines an object region by averaging the pixel value differences between the pixels of a captured input image and the corresponding pixels in the panorama image. Because the regional motion vectors are estimated from the coordinates of pixels selected in the lower regions, computation is simplified and processing speed is increased. In addition, the effect of wrongly estimated regional motion vectors is minimized by using the codebook. Furthermore, noise is minimized and the object can be accurately extracted by using the averaged pixel value differences.
Owner:CHUNG ANG UNIV IND ACADEMIC COOP FOUND
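
The object-region step can be pictured with a short sketch: locally averaged absolute pixel value differences between an input frame and the corresponding region of the panorama background, thresholded into a mask. This assumes the frame is already aligned with the panorama; the window size and threshold are made-up values, not figures from the patent.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def object_mask(frame, background, win=5, thresh=20.0):
        """Label pixels whose locally averaged pixel-value difference from the
        panorama background exceeds a threshold (illustrative sketch)."""
        diff = np.abs(frame.astype(np.float32) - background.astype(np.float32))
        avg_diff = uniform_filter(diff, size=win)  # average differences to suppress noise
        return avg_diff > thresh                   # True where an object is likely present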

Apparatus and method for performing dynamic capacitance compensation (DCC) in liquid crystal display (LCD)

Active · US20060098879A1 · Minimizing chip size · Reducing picture-quality deterioration · Character and pattern recognition · Cathode-ray tube indicators · Capacitance · Liquid-crystal display
There are provided an apparatus and method for performing dynamic capacitance compensation (DCC) in a liquid crystal display (LCD). The DCC apparatus includes: a first line buffer reading and temporarily storing the pixel values of an image line by line; an encoder transforming and quantizing the stored pixel values block by block and generating bit streams; a memory storing the generated bit streams; a decoder decoding the bit streams stored in the memory for each block and outputting the decoded pixel values; a second line buffer reading and temporarily storing the decoded pixel values for each block; and a compensation pixel value detector detecting a compensation pixel value for each pixel from the pixel value differences between the pixel values of the current frame stored in the first line buffer and the pixel values of the previous frame stored in the second line buffer. It is therefore possible to reduce the number of pins of the memory interface by reducing the number of memory devices needed to store the image pixel values required for DCC in an LCD, minimizing the chip size, and to improve compression efficiency without visible deterioration of the images.
Owner:SAMSUNG ELECTRONICS CO LTD
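
A toy version of the compensation step might look like the following, which overdrives each pixel in proportion to its difference from the previous frame; real DCC hardware typically uses a calibrated per-panel lookup table indexed by (previous, current) values, and the gain here is purely illustrative.

    import numpy as np

    def dcc_compensate(curr_line, prev_line, gain=0.5):
        """Toy dynamic capacitance compensation: overdrive each pixel in
        proportion to its pixel value difference from the previous frame
        (illustrative; the gain is not a value from the patent)."""
        diff = curr_line.astype(np.int16) - prev_line.astype(np.int16)
        out = curr_line.astype(np.int16) + (gain * diff).astype(np.int16)
        return np.clip(out, 0, 255).astype(np.uint8)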

Moving target detection method and device and electronic equipment

The invention provides a moving target detection method, a device, and electronic equipment. The method comprises the following steps: determining a current frame image and a specified frame image corresponding to the current frame image from the video to be detected, according to the order of the video frames in that video; determining a detection area for the moving target according to the pixel value differences of the corresponding pixel points between the current frame image and the specified frame image; inputting the image data corresponding to the detection area into a preset target detection model and outputting an initial detection result for the detection area; and mapping the initial detection result back to the current frame image to obtain the final detection result. In this embodiment, a rough detection area for the moving target is obtained from the pixel value difference between the two frames, which narrows the detection range of the target detection model and improves detection efficiency. At the same time, most static targets are excluded from the detection area, which reduces their interference with the model's dynamic target detection and improves detection accuracy.
Owner:BEIJING DIDI INFINITY TECH & DEV
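
A minimal sketch of the coarse detection-area step is shown below, assuming 8-bit grayscale frames and an illustrative difference threshold; feeding the crop to the detection model and mapping its output back are only indicated in comments.

    import numpy as np

    def detection_region(curr, ref, thresh=25):
        """Coarse moving-target region from the pixel value difference between
        the current frame and a specified (e.g., earlier) frame."""
        moving = np.abs(curr.astype(np.int16) - ref.astype(np.int16)) > thresh
        ys, xs = np.nonzero(moving)
        if ys.size == 0:
            return None                                   # no motion detected
        return ys.min(), ys.max(), xs.min(), xs.max()     # bounding box (y0, y1, x0, x1)

    # The crop curr[y0:y1+1, x0:x1+1] would then be passed to the target
    # detection model, and any boxes it returns mapped back to the full frame
    # by adding the (x0, y0) offsets.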

Video de-noising processing method and device

The embodiment of the present invention discloses a video de-noising processing method. The method comprises: obtaining the current frame image of the video; when a target frame image exists, calculating the pixel value difference between each pixel point of the current frame image and the pixel point at the corresponding position in the target frame image and obtaining the absolute value of the pixel value difference, wherein the target frame image is the frame image output after the frame preceding the current frame image was de-noised; determining whether the absolute value is larger than a preset threshold; if so, updating the pixel point at the corresponding position in the target frame image to the pixel point of the current frame image; and when the preset number of pixel points of the current frame image have completed the difference calculation, outputting the updated target frame image. The present invention further discloses a video de-noising processing device. The invention solves the technical problem that prior-art video de-noising algorithms cannot meet a mobile terminal's requirements for low computational load and zero delay on a real-time video stream.
Owner:GUANGZHOU BAIGUOYUAN NETWORK TECH
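
The per-pixel update rule can be sketched as follows, assuming grayscale frames and an illustrative threshold; the output of each step becomes the target frame for the next frame, as the abstract describes.

    import numpy as np

    def denoise_step(curr, target, thresh=10):
        """One de-noising step: keep the previously output pixel unless the
        new frame differs from it by more than `thresh` (illustrative sketch)."""
        diff = np.abs(curr.astype(np.int16) - target.astype(np.int16))
        out = target.copy()
        out[diff > thresh] = curr[diff > thresh]   # changed pixels follow the new frame
        return out                                 # serves as the target for the next frame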

Dynamic face identification system and method

The present invention relates to a dynamic face identification system and method. The system comprises a detection module, a pre-processing module, a feature-extraction training module, and a feature-extraction identification module. The detection module loads a face detector, reads a video stream or a panoramic image to detect faces, and displays, crops, and saves the detected face images in real time. The pre-processing module applies gray-level transformation and image normalization to the face images, screens them by the accumulation of pixel value differences in the wavelet domain (APVD), and retains the face images that meet the standard. The feature-extraction training module establishes a training sample base and indexes of the training samples, reads the qualifying face images to extract features, and carries out PCA feature dimensionality reduction and BP neural network training. The feature-extraction identification module reads the qualifying face images, carries out normalization, curve Gabor wavelet (CGW) feature extraction, PCA dimensionality reduction, and identification, and outputs an identification result. By calculating the blur degree of the images, selecting the face images that meet the identification requirements, and using curve Gabor wavelets to extract effective face features, the present invention improves the identification rate.
Owner:桂林远望智能通信科技有限公司
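
One plausible reading of the APVD screening step is sketched below using PyWavelets: sum the magnitudes of one-level Haar detail coefficients as a sharpness score and keep only faces whose score exceeds a threshold. The exact APVD formulation and any threshold in the patent may differ; this is an assumption for illustration only.

    import numpy as np
    import pywt  # PyWavelets

    def apvd_sharpness(face_gray):
        """Accumulated pixel value differences in the wavelet domain (APVD),
        read here as the summed magnitude of high-frequency detail
        coefficients; larger values indicate a sharper face image."""
        _, (cH, cV, cD) = pywt.dwt2(face_gray.astype(np.float32), 'haar')
        return float(np.abs(cH).sum() + np.abs(cV).sum() + np.abs(cD).sum())

    # Usage sketch: retain the face only if apvd_sharpness(face) > some_standard,
    # where some_standard is a hypothetical quality threshold.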

Guided trilateral filtering ultrasonic image speckle noise removal method

A guided trilateral filtering method for removing speckle noise from ultrasonic images comprises the steps of: calculating the spatial-domain distance weight of a guide image through a Gaussian function, with its standard deviation set to increase as the noise intensity increases; carrying out histogram fitting on a local area of the guide image, with a Fisher-Tippett probability density function selected as the fitting function; estimating the distribution parameters of the Fisher-Tippett probability density function by the maximum likelihood method and calculating a distribution similarity weight from the estimated parameters; calculating the pixel value difference weight of the guide image using an exponential function, with its scale parameter set to vary in direct proportion to the estimated Fisher-Tippett distribution parameters; and carrying out local iterative filtering of the ultrasonic image with the three calculated weights, iterating to convergence to obtain the ultrasonic image with speckle noise removed. The method computes the filtering weight from three kinds of information, namely the spatial-domain distance, the pixel value difference, and the distribution similarity, so that speckle noise can be effectively reduced while the detail and edge information of the image is better preserved, thereby enhancing the visual interpretation of the ultrasonic image.
Owner:CHINA THREE GORGES UNIV
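
A simplified sketch of how the three weights might be combined inside one filter window is given below; the spatial and pixel-value-difference weights follow the description above, while the distribution-similarity weight is passed in precomputed because the Fisher-Tippett fitting step is omitted. Parameter names and values are illustrative, not taken from the patent.

    import numpy as np

    def trilateral_center(patch, guide_patch, center_val, sigma_s, sigma_r, w_dist):
        """Filter the center pixel of one window using three weights:
        spatial-distance Gaussian, pixel value difference exponential on the
        guide image, and a precomputed distribution-similarity weight w_dist."""
        k = patch.shape[0] // 2
        y, x = np.mgrid[-k:k + 1, -k:k + 1]
        w_space = np.exp(-(x**2 + y**2) / (2 * sigma_s**2))                  # spatial distance
        w_range = np.exp(-(guide_patch - center_val)**2 / (2 * sigma_r**2))  # pixel value difference
        w = w_space * w_range * w_dist                                       # three weights combined
        return (w * patch).sum() / (w.sum() + 1e-12)                         # filtered center value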

Training model generation method and human face detection method and device

An embodiment of the invention discloses a training model generation method comprising the following steps: a plurality of positive and negative human face samples are obtained; pixel-difference features between each pixel of each sample and the other pixels of that sample are calculated; one target pixel-difference feature is chosen from all the calculated pixel-difference features; the point coordinates and pixel value difference corresponding to the target pixel-difference feature are set as a decision node of a decision tree and used to separate the positive samples from the negative samples; the decision nodes of the decision tree are used as weak classifiers, which are then cascaded; the steps from choosing the target pixel-difference feature through cascading the weak classifiers are executed iteratively; and finally a strong classifier is generated. The point coordinates and pixel value difference corresponding to each decision node of the decision tree in the strong classifier are stored, thereby generating the training model. Because the strong classifier is built from decision trees, a human face can be subjected to secondary classification, and the time needed for face detection is reduced while the same precision is maintained.
Owner:GUANGZHOU BAIGUOYUAN NETWORK TECH
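
The weak classifier described above, a decision node that thresholds the difference of two pixel values, can be sketched as follows, together with a weighted vote standing in for the cascaded strong classifier; the class names, polarity handling, and voting rule are illustrative assumptions rather than the patent's exact construction.

    class PixelDiffStump:
        """Weak classifier (decision node): compares the difference between
        two stored pixel coordinates of a sample to a threshold (sketch)."""
        def __init__(self, pt_a, pt_b, thresh, polarity=1):
            self.pt_a, self.pt_b = pt_a, pt_b       # (row, col) coordinates stored by the model
            self.thresh, self.polarity = thresh, polarity

        def predict(self, img):
            diff = int(img[self.pt_a]) - int(img[self.pt_b])  # pixel value difference feature
            return 1 if self.polarity * diff > self.polarity * self.thresh else -1

    def strong_classify(img, stumps, alphas):
        """Weighted vote over the cascaded weak classifiers (illustrative)."""
        score = sum(a * s.predict(img) for s, a in zip(stumps, alphas))
        return 1 if score >= 0 else -1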