507 results for "Image gradient" patented technology

An image gradient is a directional change in the intensity or color of an image. The gradient of an image is one of the fundamental building blocks of image processing; for example, the Canny edge detector uses image gradients for edge detection. In graphics software for digital image editing, the term gradient or color gradient is also used for a gradual blend of colors, which can be considered an even gradation from low to high values, such as a blend from white to black. Another name for this is color progression.
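
As a minimal illustration of the definition above (a sketch assuming NumPy and SciPy are available; the random image and the Sobel kernels are placeholders, not taken from any of the patents listed below):

    import numpy as np
    from scipy.ndimage import convolve

    # Placeholder grayscale image; in practice it would be loaded from a file.
    image = np.random.rand(128, 128)

    # Sobel kernels approximate the partial derivatives along x and y.
    sobel_x = np.array([[-1.0, 0.0, 1.0],
                        [-2.0, 0.0, 2.0],
                        [-1.0, 0.0, 1.0]])
    sobel_y = sobel_x.T

    gx = convolve(image, sobel_x)      # directional change along x
    gy = convolve(image, sobel_y)      # directional change along y

    magnitude = np.hypot(gx, gy)       # edge strength, as used by the Canny detector
    direction = np.arctan2(gy, gx)     # gradient orientation in radians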

Computer method and apparatus for processing image data

A method and apparatus for image data compression includes detecting a portion of an image signal that uses a disproportionate amount of bandwidth compared to other portions of the image signal. The detected portion of the image signal results in determined components of interest. Relative to a certain variance, the method and apparatus normalize the determined components of interest to generate an intermediate form of the components of interest. The intermediate form represents the components of interest reduced in complexity by the certain variance and enables a compressed form of the image signal where the determined components of interest maintain saliency. In one embodiment, the video signal is a sequence of video frames. The step of detecting includes any of: (i) analyzing image gradients across one or more frames, where the image gradient is a first derivative model and the gradient flow is a second derivative, (ii) integrating finite differences of pels temporally or spatially to form a derivative model, (iii) analyzing an illumination field across one or more frames, and (iv) predictive analysis, to determine bandwidth consumption. The determined bandwidth consumption is then used to determine the components of interest.
Owner:EUCLID DISCOVERIES LLC
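
The detection step above mentions analyzing image gradients across frames (a first-derivative model) and integrating finite differences of pels to estimate bandwidth consumption. The following is only a rough sketch of that idea under simplifying assumptions; the block size, the plain frame difference and the 2x-average threshold are illustrative choices, not the patented method:

    import numpy as np

    def detect_components_of_interest(prev_frame, frame, block=16):
        """Flag blocks whose spatial/temporal gradient energy is disproportionately high.

        Illustrative sketch only: spatial gradients act as a first-derivative model
        and the frame difference as a crude temporal finite difference of pels.
        """
        f = frame.astype(np.float64)
        gy, gx = np.gradient(f)                     # spatial first derivatives
        dt = f - prev_frame.astype(np.float64)      # temporal finite difference
        energy = gx ** 2 + gy ** 2 + dt ** 2

        h, w = f.shape
        cropped = energy[:h - h % block, :w - w % block]
        scores = cropped.reshape(h // block, block, w // block, block).mean(axis=(1, 3))

        # "Disproportionate" here simply means well above the average block energy.
        return scores > 2.0 * scores.mean()

The resulting boolean map plays the role of the determined components of interest that the later normalization and compression stages would act on.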

Fast 3D-2D image registration method with application to continuously guided endoscopy

A novel framework for fast and continuous registration between two imaging modalities is disclosed. The approach makes it possible to completely determine the rigid transformation between multiple sources at real-time or near real-time frame-rates in order to localize the cameras and register the two sources. A disclosed example includes computing or capturing a set of reference images within a known environment, complete with corresponding depth maps and image gradients. The collection of these images and depth maps constitutes the reference source. The second source is a real-time or near-real time source which may include a live video feed. Given one frame from this video feed, and starting from an initial guess of viewpoint, the real-time video frame is warped to the nearest viewing site of the reference source. An image difference is computed between the warped video frame and the reference image. The viewpoint is updated via a Gauss-Newton parameter update and certain of the steps are repeated for each frame until the viewpoint converges or the next video frame becomes available. The final viewpoint gives an estimate of the relative rotation and translation between the camera at that particular video frame and the reference source. The invention has far-reaching applications, particularly in the field of assisted endoscopy, including bronchoscopy and colonoscopy. Other applications include aerial and ground-based navigation.
Owner:PENN STATE RES FOUND
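
The update loop in the abstract (warp the live frame toward a reference view, compute the image difference, apply a Gauss-Newton parameter update until the viewpoint converges) can be illustrated with a deliberately reduced example. The sketch below estimates only a 2-D translation rather than the full rigid 3-D pose with depth maps, and every array, step count and tolerance in it is a placeholder:

    import numpy as np
    from scipy.ndimage import shift

    def gauss_newton_translation(reference, live, iters=20, tol=1e-3):
        """Align `live` to `reference` by a 2-D translation via Gauss-Newton updates.

        Simplified stand-in for the patent's viewpoint update, which estimates a
        rigid 3-D transformation using depth maps and precomputed image gradients.
        """
        ref = reference.astype(np.float64)
        liv = live.astype(np.float64)

        p = np.zeros(2)                            # initial guess (row, col translation)
        gy, gx = np.gradient(ref)                  # precomputed reference image gradients
        J = np.stack([gy.ravel(), gx.ravel()], axis=1)
        H = J.T @ J                                # Gauss-Newton normal matrix

        for _ in range(iters):
            warped = shift(liv, p, order=1)        # warp the live frame toward the reference
            r = (warped - ref).ravel()             # image difference
            dp = np.linalg.solve(H, J.T @ r)       # Gauss-Newton parameter update
            p += dp
            if np.linalg.norm(dp) < tol:           # viewpoint has converged
                break
        return p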

Sea surface wind measurement method based on X-band marine radar

CN102681033A (active)
The invention discloses a sea surface wind measurement method based on an X-band marine radar and belongs to the technical field of marine dynamic environment remote sensing. The measurement method comprises three parts: radar image preprocessing, wind direction measurement and wind speed measurement. For wind direction measurement, the image gradient, gray level and smoothing term are combined, their proportions are adjusted through proportionality factors, and a model suited to sea surface wind characteristics is established; compared with the prior art, the wind direction measurement precision is improved by 68.4 percent. For wind speed measurement, when the radar measures on its own, the normalized radar cross section (NRCS), the measured wind direction and the signal-to-noise ratio (SNR) serve as inputs to a back propagation (BP) network; compared with the traditional algorithm, the wind speed measurement precision is improved by over 84 percent. When sea boundary layer parameters serve as additional inputs to the BP network, the wind speed measurement precision of the marine radar is further improved: taking the air-sea temperature difference, salinity, sea level and atmospheric pressure into consideration improves the measurement precision by over 48 percent.
Owner:哈尔滨哈船导航技术有限公司
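
As a loose illustration of combining the three wind-direction indexes above: the gradient term below is computed from the image gradient field of a preprocessed radar image, while the gray-level and smoothing terms are taken as precomputed inputs because their exact definitions are not given in the abstract. The proportionality factors k_grad, k_gray and k_smooth and the candidate grid are placeholder assumptions, not the patent's model:

    import numpy as np

    def gradient_term(radar_image, candidates):
        """Per-candidate score from the image gradient field of a radar image.

        Wind streaks leave intensity gradients roughly perpendicular to the wind,
        so gradient energy aligned with a candidate direction is penalized.
        """
        gy, gx = np.gradient(radar_image.astype(np.float64))
        grad_dir = np.arctan2(gy, gx)
        grad_mag = np.hypot(gx, gy)
        return np.array([np.sum(grad_mag * np.cos(grad_dir - theta) ** 2) / np.sum(grad_mag)
                         for theta in candidates])

    def estimate_wind_direction(radar_image, gray_term, smoothing_term,
                                k_grad=0.6, k_gray=0.3, k_smooth=0.1):
        """Combine gradient, gray-level and smoothing indexes with proportionality factors.

        k_grad, k_gray and k_smooth are illustrative placeholders, not the patent's values.
        """
        candidates = np.linspace(0.0, 2.0 * np.pi, len(gray_term), endpoint=False)
        cost = (k_grad * gradient_term(radar_image, candidates)
                + k_gray * np.asarray(gray_term)
                + k_smooth * np.asarray(smoothing_term))
        return candidates[int(np.argmin(cost))]    # estimated wind direction in radians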

SAR image segmentation method based on wavelet pooling convolutional neural networks

The invention discloses an SAR image segmentation method based on wavelet pooling convolutional neural networks. The SAR image segmentation method comprises: 1. constructing a wavelet pooling layer and forming wavelet pooling convolutional neural networks; 2. selecting image blocks, inputting them into the wavelet pooling convolutional neural networks, and training the networks; 3. inputting all the image blocks into the trained networks and testing them to obtain a first class mark of the SAR image; 4. performing superpixel segmentation of the SAR image, and blending the superpixel segmentation result with the first class mark of the SAR image to obtain a second class mark of the SAR image; 5. obtaining a third class mark of the SAR image according to a Markov random field model, and blending the third class mark of the SAR image with the superpixel segmentation result to obtain a fourth class mark of the SAR image; and 6. blending the second class mark of the SAR image with the fourth class mark of the SAR image according to an SAR image gradient map to obtain the final segmentation result. The SAR image segmentation method based on wavelet pooling convolutional neural networks improves the segmentation effect on SAR images and can be used for target detection and identification.
Owner:XIDIAN UNIV
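
Step 1 replaces the usual max or average pooling with a wavelet pooling layer. One plausible, minimal form of wavelet pooling is a single-level Haar transform that keeps only the low-frequency LL subband; the Haar choice and the decision to discard the detail subbands are assumptions for illustration, not necessarily the patent's construction:

    import numpy as np

    def haar_wavelet_pool(feature_map):
        """2x downsampling by a one-level Haar wavelet transform, keeping the LL subband.

        feature_map: array of shape (channels, H, W) with even H and W.
        Returns an array of shape (channels, H // 2, W // 2).
        """
        x = feature_map.astype(np.float64)
        a = x[:, 0::2, 0::2]     # top-left pixel of every 2x2 block
        b = x[:, 0::2, 1::2]     # top-right
        c = x[:, 1::2, 0::2]     # bottom-left
        d = x[:, 1::2, 1::2]     # bottom-right
        return (a + b + c + d) / 2.0   # orthonormal Haar low-pass in both directions

The detail subbands (for example (a - b + c - d) / 2) could instead be kept as additional channels if the network is meant to preserve high-frequency information.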

Super resolution image reconstruction method based on gradient consistency and anisotropic regularization

The invention discloses a super resolution image reconstruction method based on gradient consistency and anisotropic regularization. The method is used to adaptively maintain high-frequency image information and recover image detail information in super resolution image reconstruction. The steps include: inputting a low resolution image; obtaining an interpolated image by upsampling the input image with bicubic interpolation; constraining the objective function with gradient consistency and anisotropic regularization (GCAR) conditions; performing a deconvolution operation on the interpolated image; judging whether the deconvolved image meets the output requirements; outputting the super resolution result if it does, and otherwise performing reconvolution and pixel replacement on the deconvolved image, proceeding to the next deconvolution operation, and iterating in this way until the output requirements are met. The method maintains gradient consistency between the low resolution image and the corresponding high resolution image in low-contrast areas, can recover image detail information adaptively, and can be used in video applications.
Owner:XIDIAN UNIV
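
A heavily simplified sketch of the loop described above, using plain gradient descent on a data-fidelity term plus a gradient-consistency penalty: the blur kernel, the step size, the weight lam and the use of spline interpolation for up/downsampling are all placeholder assumptions, not the GCAR formulation itself:

    import numpy as np
    from scipy.ndimage import convolve, zoom

    def reconstruct_sr(low_res, scale=2, lam=0.05, step=0.2, iters=50):
        """Toy super resolution: interpolate, then iteratively deconvolve while a
        gradient-consistency penalty pulls the estimate's gradients toward those
        of the interpolated image."""
        y = low_res.astype(np.float64)
        kernel = np.ones((3, 3)) / 9.0                        # placeholder blur kernel
        interp = zoom(y, scale, order=3)                      # initial interpolated image
        gy0, gx0 = np.gradient(interp)                        # reference (consistency) gradients
        x = interp.copy()

        for _ in range(iters):
            # Data term: the re-blurred, downsampled estimate should match the input.
            resid = zoom(convolve(x, kernel), 1.0 / scale, order=1) - y
            data_grad = convolve(zoom(resid, scale, order=1), kernel)
            # Consistency term: the gradient of 0.5 * ||grad(x) - grad(interp)||^2
            # is minus the divergence of the gradient mismatch.
            gy, gx = np.gradient(x)
            div = np.gradient(gx - gx0, axis=1) + np.gradient(gy - gy0, axis=0)
            x -= step * (data_grad - lam * div)
        return x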

Image blind deblurring method based on edge self-adaption

The invention discloses an image blind deblurring method based on edge self-adaption. To solve the problem that existing total variation deblurring algorithms easily blur image edges and details, a de-mean gradient total variation regularization model is built, weighting coefficients are calculated iteratively and adaptively from the local variance of the image gradients, and the ability of the deblurring algorithm to restore image edges and details is thereby improved. The image blind deblurring method comprises the following steps: (1) a blurred image is input, solutions for a gradient-domain clear image and a blurring kernel are obtained alternately, and the initial blurring kernel of the blurred image is obtained; (2) the initial blurring kernel is used to conduct a first non-blind deblurring of the blurred image, and an initial clear image is obtained; (3) clustering is conducted on the initial clear image, the mean value and the weighting coefficients in the de-mean regularization model are updated, and a solution for the blurring kernel is obtained again; (4) the new blurring kernel is used to conduct a second non-blind deblurring so as to obtain a clear image. Experimental results show that the image blind deblurring method based on edge self-adaption has a better deblurring effect than the prior art and can be used for image restoration.
Owner:XIDIAN UNIV
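
Step (3) updates the weighting coefficients adaptively from the local variance of the image gradients. A rough, hypothetical sketch of such a per-pixel weight map follows; the window size and the mapping from variance to weight are illustrative choices, not the formula in the patent:

    import numpy as np
    from scipy.ndimage import uniform_filter

    def adaptive_weights(image, window=7, eps=1e-3):
        """Per-pixel weights from the local variance of the image gradients.

        Smooth regions (low gradient variance) get large weights so the total
        variation prior is enforced strongly there; edge and texture regions
        (high variance) get small weights so edges and details are preserved.
        """
        gy, gx = np.gradient(image.astype(np.float64))
        grad_mag = np.hypot(gx, gy)

        local_mean = uniform_filter(grad_mag, size=window)
        local_sq_mean = uniform_filter(grad_mag ** 2, size=window)
        local_var = np.maximum(local_sq_mean - local_mean ** 2, 0.0)

        return 1.0 / (local_var + eps)   # illustrative mapping; the patent's formula differs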

Wireless multimedia sensor network-oriented video compression method

The invention provides a wireless multimedia sensor network-oriented video compression method, which solves the problem of the large data volume in video applications. With the method, the code rate is reduced while the quality of the decoded image is improved, and ultimately the energy consumption of the sensor nodes is reduced, so that the life cycle of the network is prolonged. In the method, encoding in strenuous-motion areas and motion-edge areas is enhanced by an ROI distinguishing algorithm, and the decoded image is postprocessed with a deblocking filter, further improving its subjective quality. On the basis of a Wyner-Ziv distributed video encoding scheme, the strenuous-motion areas are extracted through an ROI judging criterion based on an image gradient field and are compressed with Huffman encoding and decoding, while the other areas are encoded and decoded with LDPC-based distributed coding. The method thus has the advantages of reducing the code rate, improving the quality of the decoded image, reducing the processing and transmission energy consumption of the nodes, optimizing video transmission and prolonging the life cycle of the whole network.
Owner:NANJING UNIV OF POSTS & TELECOMM
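
A simplified sketch of an ROI decision driven by an image gradient field: the frame-difference step, the mean-plus-two-sigma threshold and the dilation are assumptions for illustration, and the Wyner-Ziv, Huffman and LDPC coding stages of the patent are not modelled:

    import numpy as np
    from scipy.ndimage import binary_dilation

    def roi_mask(prev_frame, frame, threshold=None):
        """Pixel mask of strenuous-motion / motion-edge areas from a gradient field.

        Illustrative only; the patent's ROI judging criterion and its coding
        decisions are more elaborate than this thresholding step.
        """
        diff = frame.astype(np.float64) - prev_frame.astype(np.float64)
        gy, gx = np.gradient(diff)                 # gradient field of the inter-frame change
        magnitude = np.hypot(gx, gy)

        if threshold is None:
            threshold = magnitude.mean() + 2.0 * magnitude.std()
        mask = magnitude > threshold
        return binary_dilation(mask, iterations=2)   # grow the mask to cover motion edges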