63 results about "Translation invariant" patented technology

Compressive sensing theory-based satellite remote sensing image fusion method

The invention discloses a compressive sensing theory-based satellite remote sensing image fusion method. The method comprises the following steps: vectorizing a panchromatic image with high spatial resolution and a multi-spectral image with low spatial resolution; constructing an over-complete atom dictionary for the sparse representation of high-spatial-resolution image blocks; establishing a model relating the high-spatial-resolution multi-spectral image to the high-spatial-resolution panchromatic image and the low-spatial-resolution multi-spectral image according to the imaging principle of each earth observation satellite; solving the sparse signal recovery problem of compressive sensing by using a basis pursuit algorithm to obtain the sparse representation of the high-spatial-resolution multi-spectral image in the over-complete dictionary; and multiplying the sparse representation by the preset over-complete dictionary to obtain the vector representation of the high-spatial-resolution multi-spectral image block, which is converted back into an image block to obtain the fusion result. By introducing compressive sensing theory into image fusion, the method markedly improves post-fusion image quality and achieves an ideal fusion effect.
Owner:HUNAN UNIV
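The sparse-recovery step above can be illustrated with a minimal sketch. Note this uses greedy orthogonal matching pursuit as a stand-in for the basis pursuit solver the patent names, and the dictionary and signal are toy placeholders, not the patented construction:

```python
import numpy as np

def omp(D, y, k):
    """Greedy sparse recovery: approximate y as D @ a with at most k nonzeros.
    (A stand-in for the basis-pursuit solver named in the abstract.)"""
    residual = y.astype(float).copy()
    support = []
    a = np.zeros(D.shape[1])
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # Least-squares fit of y on the selected atoms.
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        a[:] = 0.0
        a[support] = coef
        residual = y - D @ a
    return a

# Toy over-complete dictionary: 8-dimensional signals, 16 unit-norm atoms.
rng = np.random.default_rng(0)
D = rng.standard_normal((8, 16))
D /= np.linalg.norm(D, axis=0)
truth = np.zeros(16); truth[[2, 9]] = [1.5, -0.7]
y = D @ truth                 # observed vectorized image patch
a = omp(D, y, k=2)            # sparse code; D @ a reconstructs the patch
print(np.linalg.norm(y - D @ a))
```

Multiplying the recovered sparse code `a` back through the dictionary, as in the final step of the abstract, reconstructs the fused patch vector.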

Sample-block sparsity image inpainting method combined with directional factors

Disclosed is a sample-block sparsity image inpainting method that incorporates directional factors. The method mainly comprises the steps of conducting preprocessing on an image to be inpainted by means of an existing image inpainting algorithm; extracting directional factors in four directions from the preprocessed image through the non-subsampled contourlet transform; determining a new structural sparseness function and a new matching criterion according to the color-directional-factor weighted distance; determining a filling-in order by means of the structural sparseness function and searching for a plurality of matching blocks according to the new matching criterion; establishing a constraint equation with color-space local sequential consistency and directional-factor local sequential consistency; optimizing and solving the constraint equation to obtain the sparse representation information of the matching blocks; and conducting filling and updating the filled-in regions until the damaged areas are completely filled in. By means of the method, the consistency of structural parts, the clearness of textured parts and the sequential consistency of neighborhood information can be effectively kept, so the method is particularly applicable to inpainting real photographs or composite images with complex textures and complex structural characteristics.
Owner:SOUTHWEST JIAOTONG UNIV
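The filling-order idea can be sketched as follows. This is only an illustration of the general structure-sparsity priority notion, assuming similarity weights are already computed; it is not the patented function:

```python
import numpy as np

def structure_sparseness(similarities):
    """Structure sparseness of a boundary patch: normalized similarity
    weights that are concentrated (sparse) indicate structure, while
    spread-out weights indicate texture. Sparser patches are filled first.
    `similarities` are non-negative similarities to neighborhood patches."""
    w = np.asarray(similarities, dtype=float)
    w = w / w.sum()
    return float(np.sqrt((w ** 2).sum()))   # ||w||_2, in [1/sqrt(n), 1]

edge_patch = structure_sparseness([0.9, 0.05, 0.05])   # concentrated weights
flat_patch = structure_sparseness([0.3, 0.35, 0.35])   # spread-out weights
print(edge_patch > flat_patch)  # → True: structure gets higher fill priority
```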

Face beauty evaluation method based on deep learning

Inactive · CN104636755A · Improve accuracy · Effective beauty feature expression · Character and pattern recognition · SVM classifier · Study methods
The invention provides a face beauty evaluation method based on deep learning. The method comprises the following steps: (1) acquiring a trainer face image set and a tester face image set; (2) learning face beauty characteristics of the trainer face image set by virtue of characteristic learning, and convoluting the original images by use of a convolution template so as to form multiple characteristic images; (3) taking the obtained characteristic images as input, learning a second-layer convolution template by use of the same characteristic learning method, and convoluting the characteristic images obtained in step (2) by use of this convolution template so as to form multiple characteristic images; (4) performing binarization encoding on the obtained characteristic images, calculating and counting histograms in local regions, and then splicing all the counted local-region histograms into a face image characteristic; and (5) quantifying face beauty evaluation into multiple equivalence forms, and classifying by use of an SVM (Support Vector Machine) classifier so as to obtain an evaluation result. According to the method, the face beauty characteristics are automatically learned from samples by virtue of a deep learning algorithm, so that a computer can intelligently evaluate face beauty.
Owner:SOUTH CHINA UNIV OF TECH
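Step (4) above, binarization encoding followed by spliced local histograms, can be sketched as a minimal illustration. The map sizes, region size, and random stand-in convolution outputs are placeholders, not the patented configuration:

```python
import numpy as np

def binary_hist_features(feature_maps, region=4):
    """Binarize a stack of convolution response maps, pack the bits into one
    integer code per pixel, and concatenate local-region histograms of the
    codes into a single descriptor.
    `feature_maps`: array of shape (n_maps, H, W)."""
    n, H, W = feature_maps.shape
    bits = (feature_maps > 0).astype(np.int64)   # binarization
    codes = np.zeros((H, W), dtype=np.int64)
    for i in range(n):                           # pack bits: one code per pixel
        codes = codes * 2 + bits[i]
    feats = []
    for r in range(0, H, region):                # histogram each local region
        for c in range(0, W, region):
            block = codes[r:r + region, c:c + region]
            feats.append(np.bincount(block.ravel(), minlength=2 ** n))
    return np.concatenate(feats)                 # spliced face descriptor

rng = np.random.default_rng(1)
maps = rng.standard_normal((3, 8, 8))            # stand-in convolution outputs
f = binary_hist_features(maps)
print(f.shape)   # → (32,): 4 regions × 2**3 histogram bins each
```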

Method and device for gesture identification based on substantial feature point extraction

The present invention discloses a method and device for gesture identification based on substantial feature point extraction. The device comprises: an extraction module configured to obtain the shape of a gesture to be identified, extract an unclosed contour from the edges of that shape and obtain the coordinates of all the contour points on the contour; a calculation module configured to calculate the area parameter of each contour point, screen the contour points according to the area parameters, extract the substantial feature points, and take the area parameters of the substantial feature point sequence and the normalized point sequence parameters as the feature parameters of the contour; and a matching module configured to use the feature parameters of the substantial feature points to match the gesture to be identified against templates in a preset template library, obtain the optimal matching template of the gesture to be identified, and determine the type of the optimal matching template as the type of the gesture to be identified. The method and device have good properties such as translation invariance, rotation invariance, scale invariance and hinging invariance while effectively extracting and expressing gesture shape features, and can effectively suppress noise interference.
Owner:SUZHOU UNIV
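The translation invariance of contour area parameters can be demonstrated with a small sketch. The signed-triangle-area definition here is an illustrative choice, assuming an area parameter built from a point and its two span-neighbors; the patent's exact formula may differ:

```python
def area_parameter(contour, i, span=1):
    """Signed triangle area at contour point i, formed with its +-span
    neighbors. Depends only on coordinate differences, so it is translation
    invariant; |area| is also rotation invariant, and dividing by a squared
    contour scale would make it scale invariant."""
    x0, y0 = contour[(i - span) % len(contour)]
    x1, y1 = contour[i]
    x2, y2 = contour[(i + span) % len(contour)]
    return 0.5 * ((x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0))

contour = [(0, 0), (2, 0), (3, 1), (2, 2), (0, 2)]
a = [area_parameter(contour, i) for i in range(len(contour))]
shifted = [(x + 5, y - 3) for x, y in contour]   # same gesture, translated
b = [area_parameter(shifted, i) for i in range(len(shifted))]
print(a == b)  # → True: the area parameters are translation invariant
```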

Polarimetric SAR classification method on basis of NSCT and discriminative dictionary learning

The invention discloses a polarimetric SAR classification method based on NSCT and discriminative dictionary learning, which mainly solves the problems of low classification accuracy and low classification speed in existing polarimetric SAR image classification methods. The method comprises the following implementation steps: 1, acquiring the coherency matrix of the polarimetric SAR image to be classified and carrying out Lee filtering on the coherency matrix to obtain the de-noised coherency matrix; 2, carrying out Cloude decomposition on the de-noised coherency matrix and using the three non-negative eigenvalues obtained from the decomposition and the scattering angle as classification features; 3, carrying out three-layer NSCT on the classification features and using the transformed low-frequency coefficients as transform-domain classification features; 4, using the transform-domain classification features together with a discriminative dictionary learning model to train a dictionary and a classifier; 5, using the trained dictionary and classifier to classify test samples so as to obtain the classification result. The method improves classification accuracy, increases classification speed, and is suitable for image processing.
Owner:XIDIAN UNIV
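Step 2, extracting the three non-negative eigenvalues and the scattering angle from the coherency matrix, can be sketched as follows. The mean-alpha formula below follows the standard Cloude eigen-decomposition convention and the toy matrix is a placeholder, not polarimetric data from the patent:

```python
import numpy as np

def cloude_features(T):
    """Eigen-decompose a 3x3 Hermitian coherency matrix T and return its
    three non-negative eigenvalues (descending) plus the mean scattering
    (alpha) angle, weighted by the eigenvalue pseudo-probabilities."""
    w, v = np.linalg.eigh(T)               # real eigenvalues, ascending
    w = np.clip(w[::-1], 0.0, None)        # descending; clamp tiny negatives
    p = w / w.sum()                        # pseudo-probabilities
    alphas = np.arccos(np.abs(v[0, ::-1]))  # alpha_i from eigenvector tops
    return w, float(p @ alphas)

# Toy Hermitian positive semi-definite coherency matrix.
T = np.array([[2.0, 0.5 + 0.1j, 0.0],
              [0.5 - 0.1j, 1.0, 0.2j],
              [0.0, -0.2j, 0.5]])
eigvals, mean_alpha = cloude_features(T)
print(eigvals, mean_alpha)  # three non-negative values and alpha in [0, pi/2]
```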

Laser marking system and method based on feature point extraction algorithm detection

The invention belongs to the technical field of image recognition, and discloses a laser marking system and method based on feature point extraction algorithm detection. The method includes extracting an unclosed contour from the to-be-identified workpiece image; obtaining the coordinates of all contour points on the contour; calculating an area parameter for each contour point; screening out the optimal contour through the area parameters; calculating the moments and the centroid of the contour; and finally extracting the needed significant feature points through the relation between the contour points and the centroid, so that the shape of the workpiece to be identified is effectively extracted and expressed, the relation between the feature points and the centroid is used to determine the effective marking area, and the final marking precision is improved. Moreover, the dimension of the feature parameters used is low, so the invention can guarantee high recognition accuracy and efficiency at the same time. The invention has excellent properties such as translation invariance and rotation invariance while effectively extracting and representing the shape characteristics of the workpiece, and can effectively suppress noise interference.
Owner:WUHAN TEXTILE UNIV
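The moment-and-centroid step can be sketched for a polygonal contour using the standard shoelace moments; this is an illustration of centroid computation in general, not the patent's specific procedure:

```python
def contour_centroid(points):
    """Centroid of a closed polygonal contour via shoelace moments.
    `points` are the polygon vertices in order (either orientation)."""
    a = cx = cy = 0.0
    n = len(points)
    for i in range(n):
        x0, y0 = points[i]
        x1, y1 = points[(i + 1) % n]
        cross = x0 * y1 - x1 * y0     # twice the signed sub-triangle area
        a += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    a *= 0.5                          # signed polygon area
    return cx / (6 * a), cy / (6 * a)

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
print(contour_centroid(square))  # → (2.0, 2.0)
```

Distances and angles from contour points to this centroid are translation invariant by construction, which is what the marking-area determination relies on.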

Remote sensing image scene classification method based on image transformation and BoF model

The invention discloses a remote sensing image scene classification method based on image transformation and a BoF model. The method comprises the steps of partitioning a remote sensing image to obtain an image block set; carrying out an improved Radon transform on all image blocks and extracting local features in combination with the scale-invariant feature transform (SIFT) of the image block set, obtaining local fusion features from the improved Radon transform features and the SIFT features; secondly, carrying out edge detection on the whole remote sensing image and applying the improved Radon transform to obtain global features of the remote sensing image; then using an improved mRMR correlation analysis algorithm based on mutual information to carry out feature selection on the local fusion features and the global features, removing unfavorable and redundant features, clustering all the features to generate feature words, and using an improved PCA algorithm to carry out weighted fusion on the feature words to obtain fused features, thereby obtaining a fused bag-of-features model of the local and global features; and finally inputting the result into a support vector machine (SVM) to generate a classifier and realize remote sensing image scene classification.
Owner:EAST CHINA UNIV OF TECH
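The bag-of-features step, assigning local descriptors to clustered "feature words" and counting them, can be sketched minimally. The fixed vocabulary below stands in for the k-means clustering the method would learn, and the 2-D descriptors are toy placeholders:

```python
import numpy as np

def bof_histogram(descriptors, vocabulary):
    """Assign each local descriptor to its nearest visual word and count,
    yielding a normalized bag-of-features histogram for the scene."""
    # Pairwise squared distances, shape (n_descriptors, n_words).
    d = ((descriptors[:, None, :] - vocabulary[None, :, :]) ** 2).sum(-1)
    words = d.argmin(axis=1)                 # nearest-word assignment
    hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
    return hist / hist.sum()                 # normalized scene signature

vocab = np.array([[0.0, 0.0], [10.0, 10.0], [0.0, 10.0]])  # 3 "feature words"
desc = np.array([[0.2, 0.1], [9.8, 10.1], [9.9, 9.7], [0.1, 9.8]])
h = bof_histogram(desc, vocab)
print(h)   # word counts [1, 2, 1] normalized to [0.25, 0.5, 0.25]
```

This histogram is what would then be fed to the SVM in the final step.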

Grayscale image recognition method, device, apparatus and readable storage medium

The embodiments of the invention disclose a grayscale image recognition method, device, apparatus and a readable storage medium. The method includes the following steps: an initial network model of the extracted outline pixel points of a grayscale image to be identified is established based on a complex network method and a watershed algorithm; the distance thresholds in a distance threshold set are applied to the node set of the initial network model, and corresponding sub-network models are established; the topological parameters of the sub-network models are calculated so as to form the target recognition parameters of the grayscale image to be identified; the strength of each node in the initial network model is calculated, and nodes whose strength satisfies a preset strength condition are selected so as to form a target network interest point set; and a candidate sample image which satisfies a preset parameter condition and has the same network interest points is obtained from a sample library by matching against the target recognition parameters and the target network interest point set, the category of the candidate sample image being taken as the category of the grayscale image to be identified. With the method, device, apparatus and readable storage medium of the invention, the recognition efficiency of grayscale images can be improved.
Owner:GUANGDONG UNIV OF TECH
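The thresholded sub-network and node-strength steps can be sketched as follows. The inverse-distance edge weight is an illustrative assumption; the patent does not specify its weighting in this abstract:

```python
import numpy as np

def node_strengths(points, threshold):
    """Build a sub-network linking outline pixels closer than `threshold`
    (no self-loops) and return each node's strength, taken here as the sum
    of its inverse-distance edge weights."""
    pts = np.asarray(points, dtype=float)
    d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
    adj = (d > 0) & (d < threshold)                       # thresholded edges
    weights = np.where(adj, 1.0 / np.where(d == 0, 1.0, d), 0.0)
    return weights.sum(axis=1)                            # node strength

pts = [(0, 0), (1, 0), (0, 1), (10, 10)]   # three near pixels, one outlier
s = node_strengths(pts, threshold=2.0)
print(s.argmax())  # → 0: the node linked to both of its near neighbours
```

Sweeping `threshold` over a distance-threshold set and recomputing topological parameters of each resulting sub-network gives the recognition parameters the abstract describes.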

Method and system for classifying UI (User Interface) abnormal images based on convolutional neural network

The invention discloses a method and a system for classifying UI (User Interface) abnormal images based on a convolutional neural network. The method comprises the steps of enabling a server side to receive to-be-processed UI picture data sent by a client side; calling an abnormality classification model to classify the to-be-processed UI picture data to obtain the picture type of the to-be-processed UI picture data, wherein the abnormality classification model is a fully trained convolutional neural network model; and returning the picture type to the client side. According to the method and system, effective features of the UI pictures can be extracted by the convolutional neural network; the features do not need to be designed by hand but are learned by training the convolutional neural network, which guarantees that the learned features as a whole have translation invariance. On the one hand, a certain reusability and universality are obtained; on the other hand, a good classification effect can be achieved through the effective features of the UI pictures, so the accuracy of picture classification is greatly improved.
Owner:FUJIAN TQ DIGITAL
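The translation-invariance property mentioned above can be illustrated with a toy convolution-plus-pooling sketch; the kernel and image are placeholders, and a real CNN would stack many learned layers rather than this single hand-written one:

```python
import numpy as np

def conv_maxpool(img, kernel):
    """Valid 2-D correlation followed by global max pooling. Because global
    pooling discards position, the pooled feature tolerates shifts of the
    pattern — a toy version of CNN translation invariance."""
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = (img[r:r + kh, c:c + kw] * kernel).sum()
    return out.max()

kernel = np.array([[1.0, -1.0], [1.0, -1.0]])   # vertical-edge detector
img = np.zeros((6, 6)); img[2:4, 1] = 1.0       # small edge pattern
shifted = np.roll(img, (1, 2), axis=(0, 1))     # same pattern, moved
print(conv_maxpool(img, kernel) == conv_maxpool(shifted, kernel))  # → True
```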
