68 results for "scale invariant" patented technology

Target recognition and shape retrieval method based on hierarchical description

The invention discloses a target recognition and shape retrieval method based on hierarchical description. The method comprises the following steps: extracting the contour of a target with a contour extraction algorithm; calculating the curvature value of each point on the contour; extracting the corner points of the target by non-maximum suppression; taking the contour segment between every two corner points as a global feature descriptor of the target; hierarchically describing the contour points according to curvature and the contour segments according to the importance of their features; merging contour segments whose values fall below an evaluation threshold into contour feature segments that serve as local feature descriptors of the target; normalizing the contour feature segments; and measuring the similarity of the contour feature segments of different targets by the Shape Contexts distance. The method can effectively extract features of a target shape, achieves scale, rotation and translation invariance, improves recognition accuracy and robustness, and reduces computational complexity.
Owner:SUZHOU UNIV
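The curvature-then-non-maximum-suppression step of the pipeline above can be sketched as follows. This is a minimal illustration, not the patent's actual method: the discrete curvature formula, window size, and threshold are our own assumptions.

```python
import numpy as np

def contour_curvature(points):
    """Discrete curvature at each point of a sampled 2-D contour."""
    pts = np.asarray(points, dtype=float)
    dx = np.gradient(pts[:, 0])
    dy = np.gradient(pts[:, 1])
    ddx = np.gradient(dx)
    ddy = np.gradient(dy)
    denom = (dx**2 + dy**2) ** 1.5 + 1e-12
    return np.abs(dx * ddy - dy * ddx) / denom

def corner_points(curvature, window=2, thresh=0.1):
    """Indices where curvature is a local maximum above thresh
    (non-maximum suppression over a +/- window neighborhood)."""
    idx = []
    n = len(curvature)
    for i in range(n):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        if curvature[i] >= thresh and curvature[i] == curvature[lo:hi].max():
            idx.append(i)
    return idx
```

For a circle of radius r the computed curvature should be close to 1/r, which gives a quick sanity check on the formula.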

Network killing chain detection method, prediction method and system

Active CN112087420A · Effective automatic identification · Implementing unsupervised learning · Character and pattern recognition · Machine learning · Feature vector · Spectral clustering algorithm
The invention discloses a network kill chain detection method, prediction method, and detection-and-prediction system. The detection method comprises the following steps: (1) constructing a d-dimensional feature vector; (2) reducing the d-dimensional feature vector to a k-dimensional feature vector with an unsupervised feature selection algorithm; (3) obtaining a set of network kill chain attack event sequences from the k-dimensional feature vectors, wherein, in the real scenario of mining kill chains from IDS alarm log data, the number of kill chains contained in the data cannot be known in advance; the improved spectral clustering algorithm disclosed by the invention not only realizes unsupervised learning but, unlike supervised methods, also identifies the number of clusters automatically; (4) based on the obtained kill chain sequences, performing predictive analysis with Markov theory and three kill chain variant models; and (5) realizing the kill chain detection and prediction system based on the theoretical analysis.
Owner:XIDIAN UNIV
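The abstract's claim of automatically identifying the cluster count in spectral clustering is commonly done with the eigengap heuristic on the normalized graph Laplacian; the sketch below shows that standard heuristic, not the patent's specific improvement (the function name and k_max cap are our own assumptions).

```python
import numpy as np

def estimate_k_eigengap(affinity, k_max=10):
    """Estimate the number of clusters from the largest eigengap of the
    symmetric normalized Laplacian L = I - D^{-1/2} A D^{-1/2}."""
    A = np.asarray(affinity, dtype=float)
    d = A.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    L = np.eye(len(A)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    # Smallest k_max eigenvalues; the count of near-zero ones ~ cluster count.
    eigvals = np.sort(np.linalg.eigvalsh(L))[:k_max]
    gaps = np.diff(eigvals)
    return int(np.argmax(gaps)) + 1
```

On an affinity matrix with c well-separated blocks, the Laplacian has eigenvalue 0 with multiplicity c, so the largest gap sits after the c-th eigenvalue.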

Sow lactation behavior recognition method based on computer vision

The invention discloses a sow lactation behavior recognition method based on computer vision, which comprises the following steps: 1) collecting overhead video segments of sows and piglets during the lactation period; 2) calculating spatio-temporal characteristics such as moving-pixel intensity, duty cycle and aggregation degree based on optical flow features, and extracting time-series key frames; 3) using the DeepLab convolutional network to segment the sow and the piglets in the key frames, automatically locating the suckling region of interest according to a shape matching algorithm, and obtaining the spatio-temporal region of interest; 4) arranging a recognition unit in the spatio-temporal region of interest to extract the motion characteristics of the piglets, including their motion distribution index characteristics and their sucking motion characteristics based on the optical flow in the normal direction of the sow's ventral line; 5) inputting the motion characteristics of the piglets into an SVM classification model to realize automatic recognition of sow lactation behavior. The invention uses the temporal and spatial motion information of the sow and the piglets during lactation to recognize lactation behavior in the pig farm environment, thereby alleviating the difficulty of manual monitoring of the pig farm.
Owner:SOUTH CHINA AGRI UNIV
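Step 2's motion-intensity-based key-frame extraction can be sketched as below; as a simplification, mean absolute inter-frame difference stands in for optical-flow magnitude, and the top-n selection rule is our own assumption rather than the patent's criterion.

```python
import numpy as np

def motion_intensity(frames):
    """Mean absolute inter-frame difference, a cheap proxy for
    optical-flow magnitude; frames has shape (T, H, W)."""
    f = np.asarray(frames, dtype=float)
    return np.abs(np.diff(f, axis=0)).mean(axis=(1, 2))

def select_key_frames(frames, n_keys=2):
    """Indices of the n_keys frames with the highest motion intensity."""
    m = motion_intensity(frames)
    order = np.argsort(m)[::-1][:n_keys]
    return sorted(int(i) + 1 for i in order)  # +1: diff entry i measures frame i+1
```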

Target detection method and system

The invention discloses a target detection method and system. The method comprises the following steps: photographing and processing an object to be detected to obtain a structural feature map of the image to be detected; searching a template library for the template map with the maximum similarity to the structural feature map, and judging from the similarity whether the image contains a target object; if so, calculating the bounding box of the template map and the bounding box of the target object in the image; comparing the bounding box of the target object with the bounding-box information of the template map to locate the target object in the image; and displaying the location of the target object and giving an alarm. The method and system adapt to scale and rotation changes of the target object in an X-ray image, and can also be applied when the object is partially occluded. In addition, they can complete a target detection task with only a small number of template images containing the target object, and therefore have strong practicability.
Owner:SHANGHAI XINBA AUTOMATION TECH CO LTD
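The "find the most similar template, then threshold the similarity" decision can be sketched as follows; cosine similarity over flattened feature maps and the 0.8 threshold are illustrative assumptions, not the patent's actual measure.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two feature maps, flattened to vectors."""
    a, b = np.ravel(a).astype(float), np.ravel(b).astype(float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def best_template(feature_map, templates, thresh=0.8):
    """Return (index, similarity) of the most similar template,
    or (None, similarity) when the best match is below thresh."""
    sims = [cosine_similarity(feature_map, t) for t in templates]
    i = int(np.argmax(sims))
    return (i if sims[i] >= thresh else None), sims[i]
```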

Cloth image retrieval method based on convolutional neural network

The invention discloses a cloth image retrieval method based on a convolutional neural network. The method comprises the steps of: preprocessing a textile fabric image, scaling the image with bilinear interpolation, and carrying out normalization and other preprocessing operations; designing a convolutional neural network as a classifier; training the network with a classification loss function and iterative gradient back-propagation to obtain a feature extractor; extracting features from the query image and the fabric library to obtain 1024-dimensional feature vectors; and computing the similarity of two feature vectors with an L2 metric and sorting the results to realize textile fabric image retrieval. According to the invention, contour spatial-position features can be extracted from the target shape, and targets with occlusion can be recognized. The method has scale, rotation and translation invariance, so the problem of incomplete contour recognition is effectively alleviated, and the accuracy and robustness of target recognition and shape retrieval are improved.
Owner:SUZHOU ZHENGXIONG ENTERPRISE DEV CO LTD
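The final L2-distance ranking step is straightforward; a minimal sketch (the function name is our own, and the 1024-dimensional vectors are stood in for by short toy vectors):

```python
import numpy as np

def rank_by_l2(query, gallery):
    """Indices of gallery vectors sorted by ascending L2 distance
    to the query vector, i.e. most similar first."""
    q = np.asarray(query, dtype=float)
    g = np.asarray(gallery, dtype=float)
    d = np.linalg.norm(g - q[None, :], axis=1)
    return [int(i) for i in np.argsort(d)]
```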

SIFT image matching method based on module value difference mirror image invariant property

The invention discloses an SIFT (Scale Invariant Feature Transform) image matching method based on a modulus-difference mirror-invariant property, which mainly addresses two problems in existing tracking and recognition technology: image matching methods have demanding timeliness requirements, and matching errors appear when a target undergoes a mirror flip during movement. For the weak mirror matching and poor timeliness of existing methods, the method provides an efficient way to handle mirror transformation, so that mirror flips are overcome and dimensionality is reduced at the same time. The method comprises the steps that: image information is input; a feature point is extracted; the gradient magnitude and direction of the feature point are computed; a principal direction is determined; the coordinates of the feature point are rotated to the principal direction; a 16*16 pixel neighborhood is divided into 16 seed points; every two axisymmetric seed points are subtracted and the modulus is taken, yielding eight seed points; each seed point is binned into a four-direction histogram; and an 8*4=32-dimensional descriptor is formed. The mirror-transformation problem of the matching method is solved, and the original 128-dimensional descriptor is reduced to 32 dimensions, so the timeliness of the method is greatly improved.
Owner:BEIJING UNIV OF POSTS & TELECOMM
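The folding step (subtract axisymmetric seed pairs, take the modulus, halve 16 seeds to 8) can be sketched as below. This is a simplified reading of the abstract: the pairing convention (seed i with seed 15-i) is our own assumption, and a real mirror flip would also permute the direction bins within each histogram, which this sketch ignores.

```python
import numpy as np

def mirror_fold_descriptor(seed_hists):
    """Fold a 16 x 4 array of seed histograms into a 32-d descriptor.

    seed_hists[i] and seed_hists[15 - i] are assumed to be axisymmetric
    partners; |difference| makes the result invariant to swapping them,
    as happens under a mirror flip of the patch.
    """
    h = np.asarray(seed_hists, dtype=float).reshape(16, 4)
    folded = np.abs(h[:8] - h[15:7:-1])  # 8 seeds x 4 directions
    return folded.ravel()                # 8*4 = 32 dimensions
```

Reversing the seed order (the idealized effect of a mirror flip under this pairing) leaves the descriptor unchanged, which is the invariance the abstract claims.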

Self-adaptive multi-strategy image fusion method based on Riemannian metric

The invention discloses a self-adaptive multi-strategy image fusion method based on the Riemannian metric. The method comprises the following steps: (1) carrying out shift-invariant shearlet transform (SIST) decomposition of the images to be fused to obtain a low-frequency sub-band coefficient and a series of high-frequency sub-band coefficients; (2) applying different fusion principles to the low- and high-frequency sub-band coefficients: a weighted-average fusion strategy for the low-frequency sub-band coefficient, and a self-adaptive multi-strategy fusion based on the Riemannian metric for the high-frequency sub-band coefficients, which defines a dissimilarity on the Riemannian space, calculates the geodesic distance of the Riemannian space formed by the high-frequency sub-band coefficients using the affine-invariant and Log-Euclidean metrics, measures the complementary and redundant attributes of the images, and then obtains the fused SIST coefficients; and (3) applying the inverse SIST to the fused coefficients to obtain the fusion image. The invention belongs to the technical field of image fusion and can realize efficient fusion of multi-source images.
Owner:SUZHOU UNIV OF SCI & TECH
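Of the two metrics named above, the Log-Euclidean distance between symmetric positive-definite (SPD) matrices is the simpler to sketch: map each matrix to its matrix logarithm and take the Frobenius distance there. This is the standard definition, not the patent's specific construction of SPD matrices from sub-band coefficients.

```python
import numpy as np

def logm_spd(M):
    """Matrix logarithm of a symmetric positive-definite matrix,
    computed via its eigendecomposition M = V diag(w) V^T."""
    w, V = np.linalg.eigh(M)
    return (V * np.log(w)) @ V.T

def log_euclidean_distance(A, B):
    """Log-Euclidean Riemannian distance: ||log(A) - log(B)||_F."""
    return float(np.linalg.norm(logm_spd(A) - logm_spd(B), 'fro'))
```

For A = I and B = e*I the logarithms are 0 and I, so the distance is sqrt(n) for n x n matrices, which makes a convenient check.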

Remote sensing image scene classification method based on image transformation and BoF model

The invention discloses a remote sensing image scene classification method based on image transformation and a BoF model. The method comprises the steps of: partitioning a remote sensing image to obtain a set of image blocks; applying an improved Radon transform to all image blocks and, combined with the scale-invariant feature transform (SIFT) of the image block set, extracting local features to obtain a local fusion feature of the improved Radon transform feature and the SIFT feature; secondly, carrying out edge detection on the whole remote sensing image and applying the improved Radon transform to obtain global features of the remote sensing image; then, using an improved mutual-information-based m-RMR correlation analysis algorithm to optimize the local fusion features and the global features, removing unfavorable and redundant features; clustering all features to generate feature words, and using an improved PCA algorithm to carry out weighted fusion of the feature words to obtain a fused feature, yielding a fused bag-of-features model of the local and global features; and finally, inputting the result into a support vector machine (SVM) to generate a classifier and realize remote sensing image scene classification.
Owner:EAST CHINA UNIV OF TECH
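The bag-of-features step common to pipelines like this one quantizes local descriptors against a learned codebook and histograms the assignments; a minimal sketch under that standard formulation (the codebook is assumed pre-learned, e.g. by the clustering the abstract mentions):

```python
import numpy as np

def bof_histogram(descriptors, codebook):
    """Assign each local descriptor to its nearest codeword and
    return the L1-normalized histogram of codeword counts."""
    d = np.asarray(descriptors, dtype=float)
    c = np.asarray(codebook, dtype=float)
    # Pairwise distances: (n_desc, n_words)
    dists = np.linalg.norm(d[:, None, :] - c[None, :, :], axis=2)
    words = np.argmin(dists, axis=1)
    hist = np.bincount(words, minlength=len(c)).astype(float)
    return hist / max(hist.sum(), 1.0)
```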

Image retrieval method based on feature fusion

The invention discloses an image retrieval method based on feature fusion, which belongs to the field of image retrieval and comprises the following steps: training a feature extraction network; extracting a multi-layer semantic floating-point descriptor of each image in a training image set and performing hash learning to generate a rotation matrix R; extracting a multi-layer semantic floating-point descriptor of each image in an image library, rotating it by R, and then binarizing it; classifying the images in the image library with a classification network; and storing the binary descriptor and the class probability vector of each image. The multi-layer semantic floating-point descriptor is obtained by extracting and fusing high-level semantic features and low-level image features of each image: the high-level semantic features comprise global descriptors, extracted by scaling each image to several different scales, extracting features with the feature extraction network, and fusing them; the low-level image features comprise SIFT descriptors, extracted by computing several SIFT features of each image and aggregating them into a VLAD vector. According to the image retrieval method, a descriptor with high discriminative power and a small memory footprint can be constructed.
Owner:HUAZHONG UNIV OF SCI & TECH
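The "rotate by R, then binarize" step matches the ITQ-style hashing pattern: multiply the floating-point descriptor by the learned rotation and take the sign of each coordinate. The sketch below shows that pattern with an assumed pre-learned R; how R is actually learned is not specified here.

```python
import numpy as np

def binarize(descriptors, R):
    """Rotate floating-point descriptors by R, then binarize by sign
    (ITQ-style): each coordinate becomes 1 if >= 0, else 0."""
    X = np.asarray(descriptors, dtype=float)
    return (X @ R >= 0).astype(np.uint8)

def hamming_distance(a, b):
    """Number of differing bits between two binary descriptors."""
    return int(np.count_nonzero(a != b))
```

At query time, retrieval ranks the stored binary descriptors by Hamming distance to the binarized query, which is why the small binary codes matter for the memory footprint the abstract claims.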