129 results about "Semantic gap" patented technology

The semantic gap characterizes the difference between two descriptions of an object by different linguistic representations, for instance languages or symbols. According to Hein, the semantic gap can be defined as "the difference in meaning between constructs formed within different representation systems". In computer science, the concept is relevant whenever ordinary human activities, observations, and tasks are transferred into a computational representation.

Image retrieval method based on object detection

The invention discloses an image retrieval method based on object detection. The method addresses the problem that, during image retrieval, multiple objects within one image are not retrieved individually. In the implementation, object detection is performed on each image in an image database so that one or more objects in the image are detected; SIFT features and MSER features of the detected objects are extracted and combined to generate feature bundles; K-means clustering and a k-d tree are used to quantize the feature bundles into visual words; visual-word indexes of the objects in the image database are built through inverted indexing, generating an image feature library; and the same object detection procedure converts the objects in a query image into visual words, similarity comparison is performed between the visual words of the query image and those of the image feature library, and the highest-scoring image is output as the retrieval result. With this method, the objects in an image can be retrieved individually, background interference and the image semantic gap are reduced, and accuracy, retrieval speed and efficiency are improved. The method is suited to retrieving a specific object, including a person, within an image.
Owner:XIDIAN UNIV
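
The following sketch (not taken from the patent) illustrates the feature-bundle / visual-word pipeline the abstract describes, assuming OpenCV and scikit-learn are available; MSER features and the k-d tree acceleration are omitted for brevity, and the object detector is left as an input since the patent does not fix one.

```python
# Minimal sketch: SIFT feature bundles per detected object, K-means visual
# vocabulary, and an inverted index from visual words to database images.
import cv2
import numpy as np
from collections import defaultdict
from sklearn.cluster import KMeans

sift = cv2.SIFT_create()

def object_descriptors(gray, boxes):
    """Extract SIFT descriptors inside each detected object region."""
    descs = []
    for (x, y, w, h) in boxes:
        roi = gray[y:y + h, x:x + w]
        _, d = sift.detectAndCompute(roi, None)
        if d is not None:
            descs.append(d)
    return descs  # one descriptor matrix ("feature bundle") per object

def build_index(images_with_boxes, n_words=500):
    """Quantize all object descriptors into visual words and invert the index."""
    bundles, owners = [], []                      # owners[i] = (image_id, object_id)
    for img_id, (gray, boxes) in enumerate(images_with_boxes):
        for obj_id, d in enumerate(object_descriptors(gray, boxes)):
            bundles.append(d)
            owners.append((img_id, obj_id))
    all_desc = np.vstack(bundles)
    kmeans = KMeans(n_clusters=n_words, n_init=4).fit(all_desc)  # visual vocabulary
    inverted = defaultdict(set)                   # word id -> images containing it
    for d, (img_id, _) in zip(bundles, owners):
        for w in kmeans.predict(d):
            inverted[w].add(img_id)
    return kmeans, inverted

def query(gray, boxes, kmeans, inverted):
    """Score database images by how many query visual words they share."""
    scores = defaultdict(int)
    for d in object_descriptors(gray, boxes):
        for w in kmeans.predict(d):
            for img_id in inverted[w]:
                scores[img_id] += 1
    return max(scores, key=scores.get) if scores else None
```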

Transfer learning-based multi-view commodity image retrieval and identification method

The invention discloses a transfer learning-based multi-view commodity image retrieval and identification method. The method comprises the following steps: 1, a multi-view image base library is established according to a commodity list, a pre-trained deep residual network is fine-tuned with a small number of commodity images through transfer learning, features of the image base library are extracted with the network and reduced in dimension to construct a feature library, and a mapping table is established according to the correspondences among the feature library, the image base library and the commodity types; 2, after images of the commodity to be identified are obtained, their features are extracted with the network and reduced in dimension; and 3, distance measurement is performed between the features of the images to be identified and the features of the images in the base library, the most similar image, i.e. the one with the shortest distance, is taken as the matching result, and the commodity type name of the image to be identified is obtained through the mapping table. Features with strong representation capability are extracted automatically, the semantic gap is further bridged, and retrieval efficiency and identification precision are improved while only a small image base library and low-dimensional features are used.
Owner:XI AN JIAOTONG UNIV
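
A minimal sketch of steps 1 to 3, assuming torchvision's pretrained ResNet-50 stands in for the "deep residual network" and PCA for the dimension reduction; the fine-tuning on commodity images and the multi-view handling are omitted, and all file paths and label lists are placeholders.

```python
# Sketch: pretrained ResNet features -> PCA reduction -> nearest-neighbour match.
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
from sklearn.decomposition import PCA

# Feature extractor: pretrained ResNet with the classification head removed.
resnet = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
resnet.fc = torch.nn.Identity()
resnet.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

@torch.no_grad()
def extract(path):
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    return resnet(x).squeeze(0).numpy()            # 2048-d feature vector

def build_library(image_paths, labels, dim=128):
    """Build the reduced feature library and the implicit mapping table."""
    feats = np.stack([extract(p) for p in image_paths])
    pca = PCA(n_components=dim).fit(feats)         # dimension reduction
    return pca, pca.transform(feats), labels       # row index -> commodity label

def identify(path, pca, lib_feats, labels):
    """Return the commodity type of the closest library image."""
    q = pca.transform(extract(path)[None, :])
    dists = np.linalg.norm(lib_feats - q, axis=1)  # Euclidean distance measurement
    return labels[int(dists.argmin())]
```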

Semantic propagation and mixed multi-instance learning-based Web image retrieval method

The invention belongs to the technical field of image processing and provides a semantic propagation and mixed multi-instance learning-based Web image retrieval method. Web image retrieval is performed by combining the visual features of images with text information. The method comprises the steps of first representing the images as BoW models, then clustering the images according to visual similarity and text similarity, and propagating the semantic characteristics of the images into their visual feature vectors through the visual vocabularies shared within a text class; in the relevance feedback stage, a mixed multi-instance learning algorithm is introduced, thereby solving the small-sample problem in the actual retrieval process. Compared with a conventional CBIR (Content Based Image Retrieval) framework, the retrieval method has the advantages that the semantic characteristics of the images are propagated to the visual features in a cross-modal way by utilizing the text information of Internet images, and semi-supervised learning is introduced into relevance feedback based on multi-instance learning to cope with the small-sample problem, so that the semantic gap can be effectively reduced and Web image retrieval performance can be improved.
Owner:XIDIAN UNIV
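
One loose reading of the semantic-propagation step is sketched below (not from the patent): each image has a visual BoW histogram and surrounding text, images are clustered on the joint visual-plus-text representation, and cluster-level visual statistics are blended back into each member's visual vector. The blending weight alpha, the TF-IDF text features and the K-means clustering are all assumptions; the mixed multi-instance relevance feedback stage is not shown.

```python
# Sketch: propagate cluster-level (text-driven) semantics into visual vectors.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.preprocessing import normalize

def propagate(visual_bow, texts, n_clusters=10, alpha=0.3):
    """visual_bow: (n_images, n_visual_words) array; texts: list of n_images strings."""
    tfidf = TfidfVectorizer(max_features=256).fit_transform(texts).toarray()
    joint = np.hstack([normalize(visual_bow), normalize(tfidf)])
    labels = KMeans(n_clusters=n_clusters, n_init=4).fit_predict(joint)
    enriched = []
    for i, v in enumerate(visual_bow):
        members = visual_bow[labels == labels[i]]
        # visual words shared by the text-visual cluster act as the common vocabulary
        cluster_profile = members.mean(axis=0)
        enriched.append((1 - alpha) * v + alpha * cluster_profile)
    return np.array(enriched)
```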

Salient object-based image retrieval method and system

The invention discloses a salient object-based image retrieval method and system. The method comprises the steps of performing saliency detection on a query image containing a salient object to determine the region where the salient object is located; determining the visual features of that region; determining the semantic type of the salient object; and performing similarity measurement between the visual features of the salient object of the query image and the visual features of the salient objects of images of the same semantic type in an image library, and returning the library images whose similarity to the query image exceeds a similarity threshold. Because image retrieval is carried out using the visual features of the region where the salient object is located, background interference is avoided; by determining the semantic type of the salient object of the query image, images of different semantic types in the image library are filtered out, so that the semantic gap of image retrieval is reduced, the retrieval complexity is lowered, and the retrieval accuracy is further improved.
Owner:NO 54 INST OF CHINA ELECTRONICS SCI & TECH GRP +1
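
The control flow the abstract describes can be outlined as below. This is a hypothetical outline: the helper functions detect_salient_region, extract_features and classify_semantic_type are assumed inputs, since the patent does not fix particular saliency, feature or classification algorithms, and cosine similarity stands in for the unspecified similarity measure.

```python
# Sketch: salient-region retrieval with semantic-type filtering and a threshold.
import numpy as np

def retrieve(query_img, library, threshold,
             detect_salient_region, extract_features, classify_semantic_type):
    """library: list of dicts with keys 'features' and 'semantic_type'."""
    region = detect_salient_region(query_img)           # saliency detection
    q_feat = extract_features(query_img, region)        # features of the salient region only
    q_type = classify_semantic_type(q_feat)             # semantic type of the salient object
    results = []
    for item in library:
        if item["semantic_type"] != q_type:             # filter out other semantic types
            continue
        sim = float(np.dot(q_feat, item["features"]) /
                    (np.linalg.norm(q_feat) * np.linalg.norm(item["features"])))
        if sim > threshold:                             # keep images above the threshold
            results.append((item, sim))
    return sorted(results, key=lambda r: r[1], reverse=True)
```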

Method for annotating image semantics based on a Gaussian mixture model

The invention discloses a method for annotating image semantics based on a Gaussian mixture model, which belongs to the technical field of image retrieval and automatic image annotation. The method comprises the following steps: S1, obtaining the relationship between the low-level visual features of an image and semantic concepts through supervised Bayesian learning, and obtaining an image feature set; S2, establishing two Gaussian mixture models for each semantic concept by means of the expectation-maximization algorithm, with an added step of eliminating noise regions; and S3, according to the image feature set, calculating the colour posterior probability and the pattern posterior probability at the region level, sorting the calculated posterior probabilities of all concepts of the image in descending order to obtain the colour ranking value of each concept, and likewise sorting the pattern posterior probabilities to obtain the pattern ranking value of each concept; the concept classes with the smallest sum of the top R ranking values are selected to annotate the image. The method markedly reduces the difference between the low-level visual features of an image and its high-level semantic concept expression, thereby effectively addressing the semantic gap problem.
Owner:常熟苏大低碳应用技术研究院有限公司
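
A simplified sketch of the annotation step follows, assuming scikit-learn's EM-based GaussianMixture is used to fit one colour model and one pattern model per concept (noise-region elimination is omitted, and "pick the concepts with the smallest combined rank" is one reading of the ranking rule in the abstract).

```python
# Sketch: per-concept Gaussian mixtures, region-level posteriors, rank-sum annotation.
from sklearn.mixture import GaussianMixture

def fit_concept_models(features_by_concept, n_components=3):
    """features_by_concept: {concept: (n_regions, d) array of one feature type}."""
    return {c: GaussianMixture(n_components=n_components).fit(X)
            for c, X in features_by_concept.items()}

def rank_concepts(region_feats, models):
    """Rank concepts by the likelihood of the image's regions (rank 1 = best)."""
    scores = {c: m.score(region_feats) for c, m in models.items()}  # mean log-likelihood
    ordered = sorted(scores, key=scores.get, reverse=True)
    return {c: rank + 1 for rank, c in enumerate(ordered)}

def annotate(colour_feats, pattern_feats, colour_models, pattern_models, R=5):
    """Pick the R concepts whose colour rank plus pattern rank is smallest."""
    cr = rank_concepts(colour_feats, colour_models)
    pr = rank_concepts(pattern_feats, pattern_models)
    total = {c: cr[c] + pr[c] for c in cr}
    return sorted(total, key=total.get)[:R]
```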

CAD semantic model search method based on design intent

Active CN106528770A — addresses the semantic gap and improves retrieval recall (classification: special data processing applications; concepts: search words, nodal).
The invention discloses a CAD semantic model search method based on design intent. The method comprises the steps of A, establishing a three-dimensional CAD model database and carrying out three-dimensional annotation of the design intent with the PMI module of UG according to the modelling, analysis and manufacturing features of each model; B, classifying the annotation information of the three-dimensional models into modelling information, analysis information and manufacturing information, and establishing a design-intent semantic tree for each model; C, establishing a domain ontology semantic model tree from the three-dimensional semantic tree database; D, establishing a search index from the ontology semantic tree; E, comparing the similarity between the target search word set and the semantic tree nodes, and returning the identical or similar nodes together with their sub-nodes; and F, calculating the corresponding model semantic similarity from the returned nodes, and returning the three-dimensional models with high semantic similarity to the user. The method solves the semantic gap problem of content-based search methods, performs similarity calculation by matching target search words against semantic annotation words, and improves the recall ratio of the search.
Owner:DALIAN POLYTECHNIC UNIVERSITY
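
A toy sketch of steps D to F is given below. It assumes the design-intent semantic trees are plain nested dicts of annotation words and uses simple word overlap as the similarity measure; the patent does not commit to a specific data structure or word-similarity function, so both are placeholders.

```python
# Sketch: index annotation words from each model's semantic tree, then score
# models by how many query words their trees contain.
from collections import defaultdict

def index_tree(model_id, node, index, path=()):
    """Walk a design-intent semantic tree and index every annotation word."""
    index[node["word"]].append((model_id, path))
    for i, child in enumerate(node.get("children", [])):
        index_tree(model_id, child, index, path + (i,))

def build_index(semantic_trees):
    index = defaultdict(list)          # annotation word -> [(model id, node path)]
    for model_id, tree in semantic_trees.items():
        index_tree(model_id, tree, index)
    return index

def search(query_words, index):
    """Score each CAD model by the fraction of query words its tree matches."""
    hits = defaultdict(set)
    for w in query_words:
        for model_id, _path in index.get(w, []):
            hits[model_id].add(w)
    scores = {m: len(ws) / len(query_words) for m, ws in hits.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```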