224 results about "Visual Word" patented technology

Visual words, as used in image retrieval systems, refer to small parts of an image that carry some kind of information related to its features (such as color, shape or texture) or to changes occurring in the pixels, such as filter responses or low-level feature descriptors (SIFT, SURF, etc.).
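A minimal sketch of the common way visual words are built, assuming OpenCV and scikit-learn are available and that image paths are placeholders: SIFT descriptors from a training set are clustered, each cluster center becomes one visual word, and a new image is described by a histogram over those words.

```python
# Minimal sketch: build a visual-word vocabulary from SIFT descriptors and
# quantize an image into a bag-of-visual-words histogram.
# Assumptions: OpenCV (cv2) and scikit-learn are installed; image paths are placeholders.
import cv2
import numpy as np
from sklearn.cluster import MiniBatchKMeans

def extract_sift(path):
    """Return the SIFT descriptor matrix (n x 128) of a grayscale image."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    _, descriptors = sift.detectAndCompute(gray, None)
    return descriptors if descriptors is not None else np.empty((0, 128), np.float32)

def build_vocabulary(image_paths, n_words=1000):
    """Cluster all descriptors; each cluster center is one visual word."""
    all_desc = np.vstack([extract_sift(p) for p in image_paths])
    return MiniBatchKMeans(n_clusters=n_words, random_state=0).fit(all_desc)

def bow_histogram(path, kmeans):
    """Quantize an image's descriptors to visual words; return a normalized histogram."""
    words = kmeans.predict(extract_sift(path))
    hist, _ = np.histogram(words, bins=np.arange(kmeans.n_clusters + 1))
    return hist / max(hist.sum(), 1)
```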

Remote sensing image classification method based on multi-feature fusion

The invention discloses a remote sensing image classification method based on multi-feature fusion, which includes the following steps: A, respectively extracting visual word bag features, color histogram features and textural features of training set remote sensing images; B, respectively using the visual word bag features, the color histogram features and the textural features of the training set remote sensing images to perform support vector machine training to obtain three different support vector machine classifiers; and C, respectively extracting visual word bag features, color histogram features and textural features of unknown test samples, using the corresponding support vector machine classifiers obtained in step B to perform category prediction to obtain three groups of category prediction results, and synthesizing the three groups of results with a weighted synthesis method to obtain the final classification result. The method further adopts an improved bag-of-words model to extract the visual word bag features. Compared with the prior art, the method can obtain more accurate classification results.
Owner:HOHAI UNIV
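A hedged sketch of steps B and C above: one SVM is trained per feature type and their class-probability outputs are fused with fixed weights. The feature matrices and fusion weights are placeholders, not values from the patent.

```python
# Sketch, assuming scikit-learn: one SVM per feature type, weighted probability fusion.
import numpy as np
from sklearn.svm import SVC

def train_classifiers(feature_sets, labels):
    """feature_sets: list of (n_samples x d_i) arrays, one per feature type."""
    return [SVC(kernel='rbf', probability=True).fit(X, labels) for X in feature_sets]

def fuse_predictions(classifiers, test_feature_sets, weights):
    """Weighted sum of per-classifier class probabilities; returns predicted labels."""
    probs = sum(w * clf.predict_proba(X)
                for clf, X, w in zip(classifiers, test_feature_sets, weights))
    return classifiers[0].classes_[np.argmax(probs, axis=1)]
```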

Natural scene image classification method based on regional potential semantic features

The invention discloses a method for the classification of natural scene images on the basis of regional potential semantic features, aiming to classify natural scene images by utilizing the regional potential semantic information of the images and the distribution of that information in space. The technical scheme comprises the following steps: firstly, a representative collection for natural scene image classification is established; secondly, sampling-point SIFT feature extraction is carried out on the images in the representative collection to generate a general visual word list; thirdly, the regional potential semantic model of an image is produced on the representative collection; fourthly, the regional potential semantic features are extracted for any image; finally, a natural scene classification model is generated, and the regional potential semantic features of an image are classified according to this model. The method introduces regional potential semantic features, thus not only describing the regional information of image sub-blocks, but also including the distribution information of the image sub-blocks in space; compared with other methods, the method of the invention can obtain higher accuracy, and no manual labeling is needed, giving it a high degree of automation.
Owner:NAT UNIV OF DEFENSE TECH
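A hedged sketch of the regional idea above: the image is split into a regular grid of sub-blocks and one visual-word histogram is built per block, so the spatial distribution of words is retained. The dense sampling step size, the grid size, and the `kmeans` codebook (as in the earlier sketch) are assumptions.

```python
# Sketch, assuming OpenCV and a pre-built k-means codebook: per-region word histograms.
import cv2
import numpy as np

def regional_histograms(gray, kmeans, grid=(4, 4), step=8):
    sift = cv2.SIFT_create()
    h, w = gray.shape
    # dense sampling: one keypoint every `step` pixels
    kps = [cv2.KeyPoint(float(x), float(y), float(step))
           for y in range(step, h - step, step)
           for x in range(step, w - step, step)]
    kps, desc = sift.compute(gray, kps)
    words = kmeans.predict(desc)
    hists = np.zeros((grid[0], grid[1], kmeans.n_clusters))
    for kp, word in zip(kps, words):
        gy = min(int(kp.pt[1] * grid[0] / h), grid[0] - 1)
        gx = min(int(kp.pt[0] * grid[1] / w), grid[1] - 1)
        hists[gy, gx, word] += 1
    return hists.reshape(-1)  # concatenated per-region histograms
```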

Sparse dimension reduction-based spectral hash indexing method

The invention discloses a sparse dimension reduction-based spectral hash indexing method, which comprises the following steps: 1) extracting image low-level features of an original image by using the SIFT method; 2) clustering the image low-level features by using the K-means method, and using each cluster center as a visual word; 3) reducing the dimensions of the visual word vectors directly by using a sparse component analysis method and making the vectors sparse; 4) solving a Euclidean-to-Hamming space mapping function by using the eigenfunctions and eigenvalues of a weighted Laplace-Beltrami operator so as to obtain low-dimensional Hamming space vectors; and 5) for an image to be searched, computing the Hamming distance between the image to be searched and the original image in the low-dimensional Hamming space and using the Hamming distance as the image similarity result. In the invention, sparse dimension reduction is adopted instead of the principal component analysis dimension reduction used in spectral hashing, so the interpretability of the result is improved; and the search problem in Euclidean space is mapped into Hamming space, so the search efficiency is improved.
Owner:ZHEJIANG UNIV
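A hedged sketch of steps 4-5 above once a low-dimensional embedding is available: each dimension is binarized (thresholding at the median is one common choice, assumed here rather than taken from the patent) and images are ranked by Hamming distance.

```python
# Sketch: binarize low-dimensional embeddings and rank by Hamming distance.
import numpy as np

def binarize(embeddings):
    """embeddings: (n_images x d) low-dimensional vectors -> (n_images x d) bit codes."""
    thresholds = np.median(embeddings, axis=0)
    return (embeddings > thresholds).astype(np.uint8)

def hamming_distance(query_code, database_codes):
    """Number of differing bits between the query code and every database code."""
    return np.count_nonzero(database_codes != query_code, axis=1)

# usage sketch: codes = binarize(low_dim_vectors); d = hamming_distance(codes[0], codes)
```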

Improved closed-loop detection algorithm-based mobile robot vision SLAM (Simultaneous Localization and Mapping) method

The present invention provides an improved closed-loop detection algorithm-based mobile robot vision SLAM (Simultaneous Localization and Mapping) method. The method includes the following steps: S1, the Kinect is calibrated using the Zhang Zhengyou calibration method; S2, ORB feature extraction is performed on acquired RGB images, and feature matching is performed by using FLANN (Fast Library for Approximate Nearest Neighbors); S3, mismatches are deleted, the space coordinates of matching points are obtained, and the inter-frame pose transformation (R, t) is estimated by adopting the PnP algorithm; S4, structureless iterative optimization is performed on the pose transformation solved by PnP; S5, the image frames are preprocessed, the images are described by using the bag of visual words, an improved similarity score matching method is used to perform image matching so as to obtain closed-loop candidates, and correct closed loops are selected; and S6, a graph optimization method centered on bundle adjustment is used to optimize poses and landmarks, and more accurate camera poses and landmarks are obtained through continuous iterative optimization. With the method of the invention, more accurate pose estimates and better three-dimensional reconstruction in indoor environments can be obtained.
Owner:CHONGQING UNIV OF POSTS & TELECOMM
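A hedged sketch of steps S2-S3 above using OpenCV: ORB features, FLANN matching with an LSH index (suited to binary descriptors), a ratio test to drop mismatches, and solvePnPRansac for the inter-frame pose. The camera matrix, distortion coefficients, and the 3-D coordinates of previous-frame keypoints (from the depth image) are assumed inputs; the ratio threshold and feature count are placeholders.

```python
# Sketch, assuming OpenCV: ORB + FLANN(LSH) matching and PnP pose estimation.
import cv2
import numpy as np

def estimate_pose(rgb_prev, rgb_curr, points3d_prev, K, dist_coeffs):
    """points3d_prev maps a keypoint index in the previous frame to its 3-D point (from depth)."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(rgb_prev, None)
    kp2, des2 = orb.detectAndCompute(rgb_curr, None)
    flann = cv2.FlannBasedMatcher(
        dict(algorithm=6, table_number=6, key_size=12, multi_probe_level=1),  # LSH index
        dict(checks=50))
    matches = flann.knnMatch(des1, des2, k=2)
    good = [m[0] for m in matches
            if len(m) == 2 and m[0].distance < 0.7 * m[1].distance]  # ratio test
    obj = np.float32([points3d_prev[m.queryIdx] for m in good])
    img = np.float32([kp2[m.trainIdx].pt for m in good])
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(obj, img, K, dist_coeffs)
    R, _ = cv2.Rodrigues(rvec)
    return R, tvec
```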

Image retrieval method, device and system

The invention discloses an image retrieval method, an image retrieval device and an image retrieval system, wherein the image retrieval method comprises the following steps: extracting the local features of a query image and quantizing the local features into visual words; querying a preset visual-word inverted list in an image database by using the visual words so as to obtain matched local-feature pairs and matched images; respectively carrying out spatial encoding of the relative spatial positions between matched local features in the query image and the matched images so as to obtain a spatial code map of the query image and spatial code maps of the matched images; performing a spatial consistency check on the spatial code map of the query image and the spatial code maps of the matched images so as to obtain the number of matched local-feature pairs that satisfy spatial consistency; and, according to the numbers of spatially consistent matched local-feature pairs of the different matched images, returning the matched images ranked by similarity. With the method provided by the invention, image retrieval accuracy and retrieval efficiency can be improved, and retrieval time can be reduced.
Owner:UNIV OF SCI & TECH OF CHINA
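A hedged sketch of the inverted-list lookup above: the index maps each visual word to the (image id, feature id) pairs that contain it, so a query only touches images that share at least one word with it. The spatial-coding verification step is not shown.

```python
# Sketch of a visual-word inverted index and candidate lookup.
from collections import defaultdict

def build_inverted_index(image_words):
    """image_words: dict image_id -> list of (word_id, feature_id) pairs."""
    index = defaultdict(list)
    for image_id, pairs in image_words.items():
        for word_id, feature_id in pairs:
            index[word_id].append((image_id, feature_id))
    return index

def query(index, query_words):
    """query_words: list of word_ids, indexed by query feature id.
    Returns candidate images with their matched local-feature pairs."""
    candidates = defaultdict(list)
    for q_feature_id, word_id in enumerate(query_words):
        for image_id, feature_id in index.get(word_id, []):
            candidates[image_id].append((q_feature_id, feature_id))
    return candidates
```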

Method for realizing quick retrieval of mass videos

The invention relates to a method for realizing the quick retrieval of massive videos. The method comprises the following steps: respectively extracting spatial feature vectors from all frame images in the video streams of a video library to obtain video feature sequences; extracting key feature vectors from the spatial feature vectors; establishing a distributed storage index database according to the key feature vectors of all video files in the video library; extracting the key feature vector sets of the videos to be retrieved and generating the video index files of the videos to be retrieved; performing video similarity retrieval in the distributed storage index database according to the video index files of the videos to be retrieved, and outputting as retrieval results the video files whose similarity is larger than a preset system value. With this method, representative visual words are adopted to replace key frames, so the video information is represented completely and compactly without a large amount of redundancy, the retrieval speed is increased, and the method has the capacity for concurrent processing of massive data and a wider application range.
Owner:SHANGHAI MEIQI PUYUE COMM TECH
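A hedged sketch of one way to read the "representative visual words instead of key frames" idea above: per-frame feature vectors are clustered and the cluster centers are kept as a compact signature of the video. The number of representatives is a placeholder, not a value from the patent.

```python
# Sketch, assuming scikit-learn: compress per-frame features into representative vectors.
from sklearn.cluster import MiniBatchKMeans

def key_feature_vectors(frame_features, n_keys=64):
    """frame_features: (n_frames x d) spatial feature vectors of one video."""
    n_keys = min(n_keys, len(frame_features))
    kmeans = MiniBatchKMeans(n_clusters=n_keys, random_state=0).fit(frame_features)
    return kmeans.cluster_centers_   # compact, low-redundancy video signature
```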

Image retrieval method based on object detection

The invention discloses an image retrieval method based on object detection. The method solves the problem that, during image retrieval, multiple objects in an image are not retrieved separately. The implementation process of the method is as follows: object detection is performed on each image in an image database, and one or more objects in the image are detected; SIFT features and MSER features of the detected objects are extracted and combined to generate feature bundles; K-means clustering and a k-d tree are adopted to quantize the feature bundles into visual words; visual word indexes of the objects in the image database are established through inverted indexing, and an image feature library is generated; and the object detection method is used to convert the objects in a query image into visual words, similarity comparison is performed between the visual words of the query image and the visual words of the image feature library, and the image with the highest score is output as the image retrieval result. Through the method, the objects in an image can be retrieved separately, background interference and the image semantic gap are reduced, and accuracy, retrieval speed and efficiency are improved; the method is suitable for image retrieval of a specific object in an image, such as a person.
Owner:XIDIAN UNIV
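A hedged sketch of the feature-bundling step above: SIFT keypoints that fall inside an MSER region's bounding box are grouped into one bundle. The subsequent quantization of bundles into visual words and the inverted indexing are not shown.

```python
# Sketch, assuming OpenCV: group SIFT descriptors by the MSER regions they fall in.
import cv2

def feature_bundles(gray):
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(gray, None)
    mser = cv2.MSER_create()
    _, bboxes = mser.detectRegions(gray)
    bundles = []
    for (x, y, w, h) in bboxes:
        idx = [i for i, kp in enumerate(keypoints)
               if x <= kp.pt[0] <= x + w and y <= kp.pt[1] <= y + h]
        if idx:
            bundles.append(descriptors[idx])
    return bundles  # one descriptor matrix per detected region
```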

Dictionary learning method, visual word bag feature extraction method and retrieval system

The invention provides a dictionary learning method. The dictionary learning method includes: 1) dividing the local feature vectors of images into first segments and second segments on the basis of dimensionality; 2) establishing a first data matrix from the first segments of a plurality of local feature vectors, and establishing a second data matrix from the second segments of a plurality of local feature vectors; and 3) subjecting the first data matrix to sparse non-negative matrix factorization to obtain a first dictionary that sparsely codes the first segments of the local feature vectors, and subjecting the second data matrix to sparse non-negative matrix factorization to obtain a second dictionary that sparsely codes the second segments of the local feature vectors. The invention further provides a visual word bag feature extraction method that sparsely represents the local feature vectors of images segment by segment on the basis of the dictionaries, and provides a corresponding retrieval system. Memory usage can be greatly reduced, word list training time and feature extraction time are shortened, and the method is particularly suitable for mobile terminals.
Owner:INST OF COMPUTING TECH CHINESE ACAD OF SCI
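A hedged sketch of steps 1-3 above: each local feature vector (e.g. a 128-d SIFT descriptor, which is non-negative) is split into two halves by dimension and one NMF dictionary is learned per half. The dictionary size and the omission of explicit sparsity regularization are assumptions, not the patent's settings.

```python
# Sketch, assuming scikit-learn: per-segment NMF dictionaries over split descriptors.
from sklearn.decomposition import NMF

def learn_segment_dictionaries(descriptors, n_atoms=64):
    """descriptors: (n x d) non-negative local feature vectors (e.g. SIFT)."""
    d = descriptors.shape[1]
    first, second = descriptors[:, :d // 2], descriptors[:, d // 2:]
    nmf1 = NMF(n_components=n_atoms, init='random', random_state=0, max_iter=300).fit(first)
    nmf2 = NMF(n_components=n_atoms, init='random', random_state=0, max_iter=300).fit(second)
    return nmf1.components_, nmf2.components_   # first and second dictionaries
```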

Remote sensing image land-use scene classification method based on two-dimensional wavelet decomposition and the visual bag-of-words model

The invention relates to a remote sensing image land-use scene classification method based on two-dimensional wavelet decomposition and the visual bag-of-words model. The method comprises the following steps: a training set for remote sensing image land-use scene classification is built; the scene images in the training set are converted to grayscale images, and two-dimensional wavelet decomposition is conducted on the grayscale images; regular-grid sampling and SIFT extraction are conducted on the converted grayscale images and on the sub-images formed by the wavelet decomposition, and universal visual word lists for the converted grayscale images and the sub-images are generated independently through clustering; visual word mapping is conducted on each image in the training set to obtain bag-of-words features; the bag-of-words features of each image in the training set and the corresponding scene category labels serve as training data for generating a classification model through the SVM algorithm; and images of each scene are classified according to the classification model. The method addresses the problem that existing scene classification methods based on the visual bag-of-words model do not sufficiently consider the texture information of remote sensing images, and can effectively improve scene classification precision.
Owner:INST OF REMOTE SENSING & DIGITAL EARTH CHINESE ACADEMY OF SCI
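A hedged sketch of the decomposition step above, using PyWavelets: a single-level 2-D wavelet transform of the grayscale image yields one approximation and three detail sub-images; SIFT words would then be extracted from the grayscale image and from each sub-image separately (not shown). The wavelet basis is a placeholder.

```python
# Sketch, assuming PyWavelets: single-level 2-D wavelet decomposition of a grayscale image.
import numpy as np
import pywt

def wavelet_subimages(gray):
    """gray: 2-D array. Returns the four single-level wavelet sub-images."""
    cA, (cH, cV, cD) = pywt.dwt2(gray.astype(np.float32), 'haar')
    return {'approx': cA, 'horizontal': cH, 'vertical': cV, 'diagonal': cD}
```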

Remote sensing image classification and retrieval method

The invention belongs to the technical field of digital image processing and particularly relates to a remote sensing image classification and retrieval method. The method comprises the following processes: constructing a training data set, constructing a scale space, constructing local features, constructing global features, constructing regularized classification features, constructing a to-be-classified data set, constructing a retrieval visual word bag, and retrieving remote sensing images. The method has the following benefits: the local features for classification are found in different scale spaces of the remote sensing images by adopting scale-invariant feature transform; the global features of the remote sensing images are constructed by adopting a generalized search tree, and a Gaussian weight function is introduced for the regularized fusion of the local features and the global features; the classification of the remote sensing images is developed on the basis of the regularized fusion of the local and global features, so that retrieval of the images is finally realized; the principle is scientific and reasonable; and the fusion of the local features and the global features of the remote sensing images can comprehensively depict the multi-scale, spatial and textural characteristics of the images, so that ground features of different sizes and orientations are classified completely.
Owner:崔植源
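A hedged sketch of one possible reading of the "Gaussian weight function" fusion above: local and global feature vectors are weighted and concatenated, with the local weight decaying as a Gaussian of a scale parameter. The exact weighting scheme of the patent is not specified here; this is an assumption for illustration only.

```python
# Sketch only: Gaussian-weighted concatenation of local and global feature vectors.
import numpy as np

def fuse_features(local_feat, global_feat, scale, sigma=1.0):
    w_local = np.exp(-scale ** 2 / (2.0 * sigma ** 2))   # Gaussian weight (assumed form)
    return np.concatenate([w_local * local_feat, (1.0 - w_local) * global_feat])
```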

Aurora image classification method based on latent topics combined with saliency information

CN103632166A (inactive)
The invention discloses an aurora image classification method based on latent topics combined with saliency information, and mainly solves the problems that existing techniques have low classification accuracy and efficiency and a narrow application range. The implementation steps of the method are: (1) preprocessing an aurora image, extracting visual words of the preprocessed aurora image and generating a visual document; (2) using the spectral residual algorithm to acquire the saliency map of the input aurora image, extracting visual words of the saliency map and generating a visual document of the saliency map; (3) concatenating the visual documents of steps (1) and (2) to generate a semantically enhanced document of the aurora image, and inputting this document into a Latent Dirichlet Allocation model to obtain the saliency-informed latent semantic distribution features (SM-LDA) of the aurora image; (4) inputting the SM-LDA features into a support vector machine for classification so as to obtain the final classification result. The method, which is applicable to scene classification and target recognition, maintains high classification accuracy while shortening classification time and improving classification efficiency.
Owner:XIDIAN UNIV
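A hedged sketch of step (2) above: the spectral residual saliency algorithm (Hou & Zhang, 2007) implemented with NumPy/SciPy. The filter sizes are commonly used defaults, assumed here rather than taken from the patent.

```python
# Sketch, assuming NumPy and SciPy: spectral residual saliency map.
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter

def spectral_residual_saliency(gray):
    """gray: 2-D float array. Returns a saliency map of the same size, scaled to [0, 1]."""
    spectrum = np.fft.fft2(gray)
    log_amplitude = np.log(np.abs(spectrum) + 1e-8)
    phase = np.angle(spectrum)
    residual = log_amplitude - uniform_filter(log_amplitude, size=3)
    saliency = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    saliency = gaussian_filter(saliency, sigma=2.5)
    return (saliency - saliency.min()) / (saliency.max() - saliency.min() + 1e-8)
```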