
535 results for "Image domain" patented technology

The image domain is the domain in which the arrangement of, and relationships among, the different gray-level intensities (pixels) of an image are expressed.
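A minimal NumPy sketch of what "image domain" means in practice: the same data viewed as a spatial grid of pixel intensities (image domain) versus as spatial frequencies after a 2-D Fourier transform. The array names and the toy image are illustrative only.

```python
import numpy as np

# Image domain: a grayscale image is a 2-D array whose entries are pixel
# intensities arranged on a spatial grid.
image = np.zeros((64, 64), dtype=np.float32)
image[24:40, 24:40] = 1.0            # a bright square on a dark background

# Neighbouring entries encode spatial relationships directly: this gradient
# is computed between adjacent pixels, i.e. entirely in the image domain.
horizontal_gradient = np.diff(image, axis=1)

# A 2-D FFT moves the same data into the frequency domain, where entries
# index spatial frequencies rather than pixel positions.
spectrum = np.fft.fftshift(np.fft.fft2(image))

print(image.shape, horizontal_gradient.shape, spectrum.shape)
```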

System for sorting document images by shape comparisons among corresponding layout components

A programming interface of a document search system enables a user to dynamically specify features of documents recorded in a corpus of documents. The programming interface provides category and format flexibility for defining different genres of documents. The document search system initially segments document images into one or more layout objects. Each layout object identifies a structural element in a document, such as a text block, graphic, or halftone. The document search system then computes a set of attributes for each of the identified layout objects. The set of attributes describes the layout structure of a page image of a document in terms of the spatial relations that layout objects have to frames of reference defined by other layout objects. Using the set of attributes, a user defines features of a document with the programming interface. After receiving a feature or attribute and a set of document images selected by the user, the system forms a set of image segments by identifying those layout objects in the set of document images that make up the selected feature or attribute. The system then sorts the set of image segments into meaningful groupings of objects that have similarities and/or recurring patterns. In operation, the system sorts images in the image domain based on segments (or portions) of a document image that have been automatically extracted by the system. As a result, searching becomes more efficient because it is performed on limited portions of a document. Finally, the document images in the set are ordered and displayed to the user in accordance with the meaningful groupings.
Owner:XEROX CORP
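A hedged sketch of the attribute-and-grouping idea described in the abstract above: layout objects carry coarse spatial attributes relative to the page frame, and image segments with matching attributes are grouped so that comparison only runs over those segments rather than whole pages. The LayoutObject type, the specific attributes, and the grouping rule are illustrative assumptions, not the patent's actual interface.

```python
from dataclasses import dataclass
from itertools import groupby

@dataclass
class LayoutObject:
    """One structural element (text block, graphic, halftone) on a page image."""
    kind: str          # e.g. "text", "graphic", "halftone"
    x: int             # bounding box position and size in pixels
    y: int
    width: int
    height: int

def attributes(obj: LayoutObject, page_width: int) -> tuple:
    """Coarse spatial attributes of a layout object relative to the page frame."""
    column = "left" if obj.x + obj.width / 2 < page_width / 2 else "right"
    band = "top" if obj.y < 300 else "body"
    return (obj.kind, column, band)

def sort_segments(objects: list[LayoutObject], page_width: int) -> dict:
    """Group image segments whose layout attributes match, mimicking the
    attribute-based sorting step: only these segments, not whole page images,
    are compared, which keeps the search cheap."""
    keyed = sorted(objects, key=lambda o: attributes(o, page_width))
    return {k: list(g) for k, g in groupby(keyed, key=lambda o: attributes(o, page_width))}

page = [LayoutObject("text", 40, 50, 500, 80),
        LayoutObject("halftone", 700, 60, 300, 200),
        LayoutObject("text", 45, 400, 500, 600)]
print(sort_segments(page, page_width=1200))
```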

Image conversion method based on variation automatic encoder and generative adversarial network

The invention provides an image conversion method based on a variational autoencoder (VAE) and a generative adversarial network (GAN). The method mainly comprises the variational autoencoder, weight sharing, the generative adversarial network, and learning. Unsupervised (unpaired) images are used to learn a bidirectional conversion function between two image domains within an unsupervised image-to-image translation framework (UNIT): each image domain is modeled with its own VAE, an adversarial training objective interacts with a weight-sharing constraint so that corresponding images are generated in the two image domains, each converted image is associated with an input image from its domain, and the image reconstruction stream and the image conversion stream are solved jointly by training the combined network. The advantages of the method are that unsupervised images are applied to the image conversion framework, so that images from two domains with no paired relationship can be converted without a training data set of corresponding image pairs, improving efficiency and practicality; the method can also be extended to unsupervised language translation.
Owner:SHENZHEN WEITESHI TECH
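A hedged PyTorch sketch of the shared-latent structure the abstract describes: one VAE per image domain whose latent layers share weights, plus an adversarial critic on the target domain, giving both a reconstruction stream and a translation stream. Layer sizes, the unit-variance posterior, and all names are illustrative assumptions, not the patented architecture.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, shared: nn.Module):
        super().__init__()
        self.private = nn.Sequential(nn.Conv2d(3, 32, 4, 2, 1), nn.ReLU(),
                                     nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU())
        self.shared = shared                  # latent layer shared across domains
    def forward(self, x):
        mu = self.shared(self.private(x))
        z = mu + torch.randn_like(mu)         # reparameterised code (unit variance assumed)
        return z, mu

class Decoder(nn.Module):
    def __init__(self, shared: nn.Module):
        super().__init__()
        self.shared = shared                  # latent layer shared across domains
        self.private = nn.Sequential(nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),
                                     nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh())
    def forward(self, z):
        return self.private(self.shared(z))

shared_enc = nn.Conv2d(64, 64, 1)             # weight-shared layers
shared_dec = nn.Conv2d(64, 64, 1)
enc_a, enc_b = Encoder(shared_enc), Encoder(shared_enc)
dec_a, dec_b = Decoder(shared_dec), Decoder(shared_dec)
disc_b = nn.Sequential(nn.Conv2d(3, 32, 4, 2, 1), nn.LeakyReLU(0.2),
                       nn.Conv2d(32, 1, 4, 2, 1))   # adversarial critic for domain B

x_a = torch.randn(1, 3, 64, 64)               # an unpaired image from domain A
z, _ = enc_a(x_a)
x_a_recon = dec_a(z)                          # VAE reconstruction stream
x_ab = dec_b(z)                               # translation stream: A -> B
fake_score = disc_b(x_ab)                     # adversarial feedback ties x_ab to domain B
print(x_ab.shape, fake_score.shape)
```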

Image restoration method based on image segmentation, and system therefor

Inactive · CN101661613A · Overcoming the matching phenomenon · Guaranteed priority patching order · Image enhancement · Image analysis · Mean-shift · Decomposition
The invention discloses an image restoration method based on image segmentation, and a system therefor. The method comprises: first, a user manually selects and marks the area to be restored in an image; then, image-domain decomposition is carried out with a mean-shift algorithm, dividing the image into a number of areas; finally, repeated iterative operations are carried out on the area to be restored until all of its pixels are filled. The method optimizes the priority calculation in the image restoration algorithm, effectively preventing over-expansion of the restored region from a high-texture area into a low-texture area; furthermore, a matched-block search criterion based on the image-domain decomposition can be formulated on that basis, so that erroneous blocks are not introduced. Compared with the original sample-based image restoration method, the results are more consistent with human visual expectations. The method has already been successfully applied to the restoration of large areas in images with complex texture and structural characteristics, as well as to tasks such as removing text and removing target objects.
Owner:BEIJING JIAOTONG UNIV
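A hedged Python sketch of the segmentation-constrained patch search suggested by the abstract: mean-shift (here via scikit-learn) splits the image into regions, and candidate source patches are accepted only from the same region as the patch being filled, so texture from a different region is never copied into the hole. The priority computation and blending of the full exemplar-based method are omitted, and all parameters are illustrative.

```python
import numpy as np
from sklearn.cluster import MeanShift

def segment_image(gray: np.ndarray, bandwidth: float = 0.2) -> np.ndarray:
    """Mean-shift over (intensity, y, x) features; returns a per-pixel region label."""
    h, w = gray.shape
    ys, xs = np.mgrid[0:h, 0:w]
    feats = np.stack([gray.ravel(), ys.ravel() / h, xs.ravel() / w], axis=1)
    labels = MeanShift(bandwidth=bandwidth, bin_seeding=True).fit(feats).labels_
    return labels.reshape(h, w)

def best_source_patch(gray, mask, labels, target_yx, size=5):
    """Pick the fully-known patch most similar to the target patch, restricted to
    candidates whose centre lies in the same mean-shift region as the target."""
    r = size // 2
    ty, tx = target_yx
    target = gray[ty - r:ty + r + 1, tx - r:tx + r + 1]
    known = ~mask[ty - r:ty + r + 1, tx - r:tx + r + 1]   # compare only known pixels
    best, best_err = None, np.inf
    h, w = gray.shape
    for y in range(r, h - r):
        for x in range(r, w - r):
            if mask[y - r:y + r + 1, x - r:x + r + 1].any():
                continue                                   # source must be fully known
            if labels[y, x] != labels[ty, tx]:
                continue                                   # stay inside the same region
            cand = gray[y - r:y + r + 1, x - r:x + r + 1]
            err = ((cand - target)[known] ** 2).sum()
            if err < best_err:
                best, best_err = (y, x), err
    return best

rng = np.random.default_rng(0)
img = rng.random((40, 40)).astype(np.float32)
hole = np.zeros_like(img, dtype=bool)
hole[15:20, 15:20] = True                                  # user-marked region to restore
labels = segment_image(img)
print(best_source_patch(img, hole, labels, (17, 17)))
```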

Three-dimensional point cloud model classification method based on convolutional neural network

The invention discloses a three-dimensional point cloud model classification method based on a convolutional neural network, comprising: S1, selecting the required number of models from the official Princeton ModelNet website, according to ModelNet10 and ModelNet40 respectively, to generate the training set and test set; S2, carrying out feature analysis on the point cloud model and constructing a classification framework; S3, ordering the point cloud; S4, visualizing the ordered point cloud data in two dimensions; S5, constructing a CNN network for the two-dimensional point cloud images. The invention applies a CNN from the image field directly to the classification of three-dimensional point cloud models for the first time, obtaining 93.97% and 89.75% classification accuracy on ModelNet10 and ModelNet40 respectively. Experimental results show that it is feasible to classify 3D point cloud models using a CNN in the image domain. The proposed PCI2CNN can effectively capture the 3D feature information of a point cloud model and is suitable for the classification of 3D point cloud models.
Owner:BEIFANG UNIV OF NATIONALITIES
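A hedged sketch of the projection step the abstract relies on: an unordered point cloud is binned into a fixed-size 2-D depth image that a standard CNN can consume. The binning rule (x-y give the pixel, max normalised z gives the intensity), the tiny network, and the 10-class head (matching ModelNet10) are illustrative assumptions, not the patented PCI2CNN.

```python
import numpy as np
import torch
import torch.nn as nn

def cloud_to_image(points: np.ndarray, size: int = 32) -> np.ndarray:
    """Bin an (N, 3) point cloud into a size x size depth image:
    normalised x-y select the pixel, the max normalised z in each bin is its value."""
    pts = (points - points.min(0)) / (np.ptp(points, axis=0) + 1e-8)
    img = np.zeros((size, size), dtype=np.float32)
    ix = np.clip((pts[:, 0] * (size - 1)).astype(int), 0, size - 1)
    iy = np.clip((pts[:, 1] * (size - 1)).astype(int), 0, size - 1)
    np.maximum.at(img, (iy, ix), pts[:, 2])
    return img

# A small CNN classifier over the projected point-cloud images (sizes illustrative).
cnn = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(32 * 8 * 8, 10))

cloud = np.random.rand(1024, 3).astype(np.float32)   # stand-in for a ModelNet model
x = torch.from_numpy(cloud_to_image(cloud)).unsqueeze(0).unsqueeze(0)
logits = cnn(x)
print(logits.shape)   # torch.Size([1, 10])
```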

Phase processing method for parallel magnetic resonance imaging

Inactive · CN104749538A · Avoid noise · Avoid the effects of aliasing artifacts · Measurements using NMR imaging systems · Image domain · MR - Magnetic resonance
The invention discloses a phase processing method for parallel magnetic resonance imaging. The method comprises the following steps: performing an inverse Fourier transform on the k-space data acquired by the multi-channel coils in parallel magnetic resonance imaging to obtain the amplitude and phase of each coil image; constructing a reference coil image and estimating the spatial sensitivity distribution of each coil in the multiple channels; performing a two-dimensional Fourier transform on the spatial sensitivity distributions relative to the reference coil image and intercepting an intermediate matrix as a convolution kernel; constructing a k-space data convolution model and solving a joint coil weight W; obtaining the k-space values of a virtual coil and performing an inverse Fourier transform to obtain a virtual coil image; unwrapping the phase and removing the background phase of the virtual coil image; and extracting the phase of a region of interest by using a mask image. With the disclosed phase processing method, the phase information of the image is acquired by combining k-space and coil data, avoiding the influence of noise and aliasing artifacts on image-domain phase-extraction algorithms during parallel magnetic resonance reconstruction under accelerated sampling.
Owner:ZHENGZHOU UNIVERSITY OF LIGHT INDUSTRY
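A hedged NumPy sketch of the first and last steps above: inverse-Fourier-transforming multi-channel k-space data, combining the coils into one complex image, and masking the phase of a region of interest. The convolution-kernel/virtual-coil estimation in the middle of the patented method is beyond a short sketch and is replaced here by a simple low-resolution sensitivity-weighted combination; phase unwrapping is also omitted.

```python
import numpy as np

def ifft2c(k):
    """Centered inverse 2-D FFT over the last two axes."""
    return np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(k, axes=(-2, -1)),
                                        axes=(-2, -1)), axes=(-2, -1))

def estimate_sensitivities(kspace, calib=16):
    """Low-resolution coil images from the central k-space region serve as a crude
    per-coil sensitivity estimate, normalised by their root-sum-of-squares."""
    ncoil, ny, nx = kspace.shape
    lowres = np.zeros_like(kspace)
    cy, cx = ny // 2, nx // 2
    sl = (slice(None), slice(cy - calib // 2, cy + calib // 2),
          slice(cx - calib // 2, cx + calib // 2))
    lowres[sl] = kspace[sl]
    imgs = ifft2c(lowres)
    rss = np.sqrt((np.abs(imgs) ** 2).sum(axis=0)) + 1e-12
    return imgs / rss

ncoil, ny, nx = 8, 64, 64
rng = np.random.default_rng(0)
kspace = rng.standard_normal((ncoil, ny, nx)) + 1j * rng.standard_normal((ncoil, ny, nx))

coil_imgs = ifft2c(kspace)                         # per-coil complex images
sens = estimate_sensitivities(kspace)              # smooth coil-phase estimate
combined = (np.conj(sens) * coil_imgs).sum(0) / ((np.abs(sens) ** 2).sum(0) + 1e-12)

phase = np.angle(combined)                         # wrapped phase; unwrapping omitted
roi = np.zeros((ny, nx), dtype=bool)
roi[16:48, 16:48] = True                           # mask image for the region of interest
roi_phase = np.where(roi, phase, 0.0)
print(roi_phase.shape)
```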

DIXON water-fat separation method in magnetic resonance imaging

The invention discloses a DIXON water-fat separation method in magnetic resonance imaging. The method comprises the following steps: a) during magnetic resonance scanning, three different echo signals are acquired, namely a first echo signal S1, a second echo signal S2 and a third echo signal S3, wherein the water-fat precession phase difference between the first echo signal S1 and the third echo signal S3 is 2nπ, n being a natural number; b) the acquired k-space signals are converted into image signals through a Fourier transform; c) effective image signal pixels are extracted in the image domain, the phase map of the composite signal of these effective pixels is extracted, and phase unwrapping is performed on the phase map to obtain a static magnetic field distribution map ΔB0; d) the water image and the fat image in the echo signals are separated by using the static magnetic field distribution map ΔB0. With the DIXON water-fat separation method provided by the invention, the echo times can be set flexibly according to the magnetic resonance imaging sequence and application requirements, so that not only can the data be processed with the conventional DIXON algorithm, but the limitations on imaging-sequence parameter settings are also reduced.
Owner:SHANGHAI UNITED IMAGING HEALTHCARE
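A hedged NumPy sketch of a simplified three-point Dixon separation with echoes at water-fat phase angles 0, π, 2π: the field-induced phase is estimated from the two in-phase echoes, the middle echo is corrected with it, and water and fat are recovered. Spatial phase unwrapping of the ΔB0 map (step c above) is omitted for brevity, and the single-pixel demo values are illustrative.

```python
import numpy as np

def dixon_three_point(s1, s2, s3):
    """Simplified 3-point Dixon (fat-signal phases 0, pi, 2*pi):
       s1 = (W + F) * e^{i*phi0}
       s2 = (W - F) * e^{i*(phi0 + phi)}
       s3 = (W + F) * e^{i*(phi0 + 2*phi)}
    where phi is the off-resonance (B0) phase accrued per echo spacing."""
    phi = 0.5 * np.angle(s3 * np.conj(s1))   # field map; spatial unwrapping omitted
    s2_corr = s2 * np.exp(-1j * phi)         # remove the B0 phase from the middle echo
    water = 0.5 * np.abs(s1 + s2_corr)
    fat = 0.5 * np.abs(s1 - s2_corr)
    return water, fat, phi

# Synthetic single-pixel example: water-dominant tissue with some fat.
W, F, phi0, phi = 0.8, 0.3, 0.4, 0.25
s1 = (W + F) * np.exp(1j * phi0)
s2 = (W - F) * np.exp(1j * (phi0 + phi))
s3 = (W + F) * np.exp(1j * (phi0 + 2 * phi))
print(dixon_three_point(s1, s2, s3))         # approximately (0.8, 0.3, 0.25)
```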

Synthetic aperture radar anti-deceptive-interference method based on shadow characteristics

The invention provides a synthetic aperture radar (SAR) anti-deceptive-interference method based on shadow characteristics. Firstly, SAR images of several types of targets, with and without shadows and under different postures, are obtained by using a synthetic aperture radar imaging method and an electromagnetic scattering simulation method. The SAR images obtained at different radar incidence angles are used as training and test samples for a convolutional neural network. To overcome the poor shadow-characteristic identification of a single convolutional network, a first-level convolutional neural network classifies targets and backgrounds to obtain targets and backgrounds of different types; a standard threshold segmentation method with multi-value processing is then applied to the key target images to obtain multi-value images in which the target regions are segmented; and real targets and deceptive targets are finally distinguished by a convolutional neural network classifier. The method realizes SAR automatic target identification and interference-target identification, achieving high-performance SAR anti-deceptive interference in the image domain.
Owner:UNIV OF ELECTRONICS SCI & TECH OF CHINA
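A hedged sketch of the two-stage pipeline described above: a first CNN separates target chips from background, candidate target chips are threshold-segmented into multi-value shadow/clutter/target maps (Otsu thresholding stands in for the unspecified "standard threshold segmentation"), and a second CNN decides real vs. deceptive from those maps. Both networks are untrained stand-ins with illustrative sizes, not the patented architectures.

```python
import numpy as np
import torch
import torch.nn as nn
from skimage.filters import threshold_otsu

def small_cnn(in_ch: int, n_classes: int) -> nn.Module:
    """Tiny stand-in classifier; the patented architectures are not described here."""
    return nn.Sequential(
        nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(), nn.LazyLinear(n_classes))

stage1 = small_cnn(1, 2)   # target vs. background
stage2 = small_cnn(1, 2)   # real vs. deceptive target, from the multi-value map

def multi_value_map(chip: np.ndarray) -> np.ndarray:
    """Threshold segmentation into a 3-level map: shadow (0), clutter (1), target (2).
    Otsu gives the bright-target threshold; a fixed fraction of it marks shadow."""
    t_hi = threshold_otsu(chip)
    t_lo = 0.3 * t_hi
    out = np.ones_like(chip, dtype=np.float32)
    out[chip < t_lo] = 0.0
    out[chip >= t_hi] = 2.0
    return out

chip = np.abs(np.random.randn(64, 64)).astype(np.float32)   # stand-in SAR image chip
x = torch.from_numpy(chip).unsqueeze(0).unsqueeze(0)
if stage1(x).argmax(1).item() == 1:                          # "contains a target" (untrained demo)
    mv = torch.from_numpy(multi_value_map(chip)).unsqueeze(0).unsqueeze(0)
    is_real = stage2(mv).argmax(1).item() == 0               # shadow-consistent => real
    print("real target" if is_real else "deceptive target")
else:
    print("classified as background")
```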