
423 results about "Category recognition" patented technology

Method and system for gesture category recognition and training using a feature vector

A computer implemented method and system for gesture category recognition and training. Generally, a gesture is a hand or body initiated movement of a cursor directing device to outline a particular pattern in particular directions done in particular periods of time. The present invention allows a computer system to accept input data, originating from a user, in the form of gesture data made using the cursor directing device. In one embodiment, a mouse device is used, but the present invention is equally well suited for use with other cursor directing devices (e.g., a track ball, a finger pad, an electronic stylus, etc.). In one embodiment, gesture data is accepted by pressing a key on the keyboard and then moving the mouse (with mouse button pressed) to trace out the gesture. Mouse position information and time stamps are recorded. The present invention then determines a multi-dimensional feature vector based on the gesture data. The feature vector is then passed through a gesture category recognition engine that, in one implementation, uses a radial basis function neural network to associate the feature vector with a pre-existing gesture category. Once identified, a set of user commands associated with the gesture category is applied to the computer system. The user commands can originate from an automatic process that extracts commands associated with the menu items of a particular application program. The present invention also allows user training so that user-defined gestures, and the computer commands associated therewith, can be programmed into the computer system.
Owner:ASSOCIATIVE COMPUTING +1
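The abstract ties mouse positions and time stamps to a feature vector that a radial basis function network maps to a gesture category. The patent does not spell out the features, so the sketch below substitutes a simple direction-histogram feature and a nearest-prototype RBF scorer; the prototypes and the traced gesture are made-up values, not the patented design.

    import numpy as np

    def gesture_feature_vector(points, n_bins=8):
        """Build a direction-histogram feature from (x, y, t) mouse samples.

        Only an illustrative stand-in for the patent's multi-dimensional
        feature vector, which is not specified in the abstract.
        """
        pts = np.asarray(points, dtype=float)
        dx, dy = np.diff(pts[:, 0]), np.diff(pts[:, 1])
        angles = np.arctan2(dy, dx)                      # stroke direction per segment
        lengths = np.hypot(dx, dy)                       # weight by segment length
        hist, _ = np.histogram(angles, bins=n_bins, range=(-np.pi, np.pi), weights=lengths)
        total = hist.sum()
        return hist / total if total > 0 else hist

    def rbf_scores(feature, prototypes, sigma=0.25):
        """Score the feature against per-category prototypes with Gaussian RBF units."""
        d2 = ((prototypes - feature) ** 2).sum(axis=1)
        return np.exp(-d2 / (2.0 * sigma ** 2))

    # Hypothetical trained prototypes: one near-horizontal, one steeply rising stroke.
    prototypes = np.array([[0.0, 0.0, 0.1, 0.4, 0.4, 0.1, 0.0, 0.0],
                           [0.0, 0.0, 0.0, 0.0, 0.0, 0.4, 0.5, 0.1]])
    trace = [(x, 100 + (x % 7), 0.01 * x) for x in range(0, 200, 5)]   # fake mouse trace
    scores = rbf_scores(gesture_feature_vector(trace), prototypes)
    print("recognized category:", int(np.argmax(scores)))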

Method for constructing public opinion knowledge map based on hot events

The present invention discloses a method for constructing a public opinion knowledge map based on hot events, and belongs to the field of natural language processing. The method comprises: obtaining microblog texts in real time, processing each microblog text, constructing text clusters, calculating the topic category to which each text cluster belongs, identifying hot events in each cluster by category, and collecting statistics on the multi-dimensional attributes of each hot event; identifying the key people and organizations involved in the discussion of the hot events and obtaining their multi-dimensional attributes; and constructing a multi-dimensional attribute system and relationship types among events, people and organizations, using these relationships as associations to build the public opinion knowledge map. With the disclosed method, hot events, people and organizations can be described from multiple dimensions and analyzed comprehensively; in addition, the weights of different topic categories can be set according to actual needs, so that public opinion knowledge maps for different topics can be constructed.
Owner:NAT COMP NETWORK & INFORMATION SECURITY MANAGEMENT CENT
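The method clusters real-time microblog texts, assigns each cluster a topic category, and links the resulting hot events to key people and organizations in a knowledge map. As a rough illustration only, the sketch below clusters a few toy texts with TF-IDF and k-means and emits (head, relation, tail) triples; the entity list, relation names and attributes are invented, and a real system would use proper NER and topic models.

    from collections import defaultdict
    from sklearn.cluster import KMeans
    from sklearn.feature_extraction.text import TfidfVectorizer

    # Toy microblog texts; real input would stream in continuously.
    texts = [
        "Flood hits city A, mayor Zhang leads rescue",
        "Heavy rain and flood in city A, Red Cross responds",
        "Tech firm B releases new phone, CEO Li presents",
        "New phone from firm B sells out, fans praise CEO Li",
    ]

    # Cluster texts, then treat each cluster as one candidate hot event.
    vec = TfidfVectorizer()
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vec.fit_transform(texts))

    clusters = defaultdict(list)
    for text, label in zip(texts, labels):
        clusters[label].append(text)

    # Knowledge map as (head, relation, tail) triples; entity extraction is faked
    # with a fixed list here -- a real system would run an NER model instead.
    known_entities = {"Zhang": "person", "Li": "person", "Red Cross": "org", "firm B": "org"}
    triples = []
    for label, cluster_texts in clusters.items():
        event = f"event_{label}"
        triples.append((event, "has_size", len(cluster_texts)))   # one multi-dimensional attribute
        for text in cluster_texts:
            for name, kind in known_entities.items():
                if name in text:
                    triples.append((event, f"involves_{kind}", name))

    for t in sorted(set(triples), key=str):
        print(t)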

Semantic map construction method based on cloud robot mixed cloud architecture

The invention discloses a semantic map construction method based on a cloud robot mixed cloud architecture, and aims to strike a proper balance between improving object identification accuracy and shortening identification time. The technical scheme is that a mixed cloud consisting of a robot, a private cloud node and a public cloud node is constructed. The private cloud node obtains the environment pictures shot by the robot, together with odometer and position data, through the ROS (Robot Operating System) message mechanism, and uses SLAM (Simultaneous Localization and Mapping) to draw a geometric map of the environment in real time from the odometer and position data. The private cloud node carries out object identification on the environment pictures, and objects that may be wrongly identified are uploaded to the public cloud node for identification. The private cloud node then maps the object category label returned from the public cloud node onto the SLAM map at the corresponding position, completing the construction of the semantic map. With this method, the local computing load of the robot is lightened, request response time is minimized, and object identification accuracy is improved.
Owner:NAT UNIV OF DEFENSE TECH
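The key trade-off in the abstract is routing: cheap recognition on the private cloud node, with only uncertain objects escalated to the public cloud node before the returned label is attached to the SLAM map. A minimal sketch of that routing logic follows, assuming a confidence threshold and stubbed recognizers; none of the values come from the patent.

    # Hypothetical routing logic between private and public cloud recognizers.

    CONFIDENCE_THRESHOLD = 0.8   # assumed cutoff; the abstract does not state a value

    def private_cloud_recognize(image):
        """Fast local model on the private cloud node (stubbed)."""
        return {"label": "chair", "confidence": 0.55}

    def public_cloud_recognize(image):
        """Slower but more accurate recognizer on the public cloud node (stubbed)."""
        return {"label": "office chair", "confidence": 0.97}

    def recognize_and_annotate(image, map_pose, semantic_map):
        """Recognize an object and attach its label to the SLAM map at map_pose."""
        result = private_cloud_recognize(image)
        if result["confidence"] < CONFIDENCE_THRESHOLD:
            # Only uncertain detections pay the round-trip cost to the public cloud.
            result = public_cloud_recognize(image)
        semantic_map[map_pose] = result["label"]
        return result

    semantic_map = {}
    recognize_and_annotate(image=None, map_pose=(2.5, 1.0), semantic_map=semantic_map)
    print(semantic_map)   # {(2.5, 1.0): 'office chair'}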

Image recognition model training and image recognition method, device and system

The invention relates to the technical field of computers, in particular to an image recognition model training and image recognition method, device and system. The method comprises the steps of: obtaining a training image sample set which at least comprises strongly labeled training image samples, where a strongly labeled training image sample is an image sample whose strong label information at least comprises the lesion category and the labeling information of the lesion position; extracting image feature information from the image samples in the training image sample set; and, based on the image feature information and the corresponding strong label information, marking the image feature information belonging to each preset lesion category and training an image recognition model according to the marking result until the strongly supervised objective function converges, so as to obtain a trained image recognition model with which a lesion category recognition result for a to-be-recognized image can be obtained. Because the image feature information of a given lesion category can be located more accurately according to the lesion position, noise is reduced, and reliability and accuracy are improved.
Owner:TENCENT HEALTHCARE (SHENZHEN) CO LTD
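The training described here supervises the classifier with both the lesion category and the lesion position, so only features at the labeled position drive the loss. The PyTorch sketch below shows one such strongly supervised training step under assumed details: a toy backbone, a pixel-aligned box format, and cross-entropy standing in for the strong-supervision objective.

    import torch
    import torch.nn as nn

    class TinyBackbone(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            )
            self.classifier = nn.Linear(32, 4)   # 4 preset lesion categories (assumed)

        def forward(self, image, box):
            fmap = self.features(image)                    # (1, 32, H, W), same spatial size
            x0, y0, x1, y1 = box
            region = fmap[:, :, y0:y1, x0:x1]              # features at the labeled lesion position
            pooled = region.mean(dim=(2, 3))               # (1, 32)
            return self.classifier(pooled)

    model = TinyBackbone()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()                        # stand-in for the strong-supervision loss

    # One fake strongly labeled sample: image, lesion box (x0, y0, x1, y1), category id.
    image = torch.randn(1, 3, 64, 64)
    box, category = (10, 12, 30, 40), torch.tensor([2])

    logits = model(image, box)
    loss = loss_fn(logits, category)
    loss.backward()
    optimizer.step()
    print("training loss:", float(loss))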

Face recognition method and device

Active · CN103136504A · Good multi-feature fusion performance · Character and pattern recognition · Feature extraction · Image matching
The invention provides a face recognition method and device. The face recognition method includes a clustering feature extraction step, a determining step, a recognition feature extraction step and a calculation step. The clustering feature extraction step carries out clustering feature extraction on a preprocessed face image. The determining step determines, among clustering feature categories trained and acquired in advance, the clustering feature category that matches the face image according to the clustering features extracted from it. The recognition feature extraction step extracts P kinds of recognition features from the preprocessed face image, where P is a natural number greater than one. The calculation step calculates the similarity between each of the P kinds of recognition features and the corresponding features of a face template registered in advance, and determines the best weight combination of the P kinds of recognition features for weighted fusion according to the clustering feature category determined in the determining step, in order to obtain the comprehensive similarity between the face image and the face template. The face recognition method can effectively improve face recognition performance.
Owner:HANVON CORP
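The core of the calculation step is a weighted fusion whose weights depend on the matched clustering feature category. The short sketch below illustrates that idea with P = 3 similarity scores and a hand-made weight table; the categories, weights and scores are placeholders, not values from the patent.

    import numpy as np

    # Illustrative weight combinations, one per hypothetical clustering category.
    WEIGHTS_BY_CLUSTER = {
        "frontal": np.array([0.5, 0.3, 0.2]),
        "profile": np.array([0.2, 0.3, 0.5]),
    }

    def fuse_similarity(similarities, cluster_category):
        """Comprehensive similarity as a cluster-dependent weighted sum."""
        weights = WEIGHTS_BY_CLUSTER[cluster_category]
        return float(np.dot(weights, similarities))

    # Similarities of the probe image to one registered template, per feature kind.
    similarities = np.array([0.82, 0.64, 0.71])
    print(fuse_similarity(similarities, "frontal"))   # comprehensive similarity score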

Fine granularity classification recognition method and object part location and feature extraction method thereof

Active · CN104573744A · Easy to identify · Accurate part positioning · Character and pattern recognition · Feature extraction · Granularity
The invention provides a fine granularity classification recognition method and an object part location and feature extraction method thereof, which together achieve effective object part location and feature expression in fine granularity classification recognition. For object part location, a series of part detectors trained by supervised learning is used; to account for the posture change and deformation of the targets to be located, the detectors only detect parts with small deformation, and different detectors are trained for the same object part by a posture clustering method, so that the posture change of objects is taken into account. For feature expression of the objects or parts, features are extracted at multiple scales and multiple positions and then fused into the final object representation, giving the features a certain scale and translation invariance. Object part location and feature expression are to some extent complementary, so the accuracy of fine granularity classification recognition can be effectively improved.
Owner:SHANGHAI JIAO TONG UNIV
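Two ideas carry the method: several pose-clustered detectors per part with the strongest response kept, and features pooled at multiple scales around the detected part. The sketch below mimics both with stub detectors and toy per-scale statistics; it only shows the structure, not the trained models.

    import numpy as np

    def detect_part(image, detectors):
        """Each detector returns (score, (x, y)); keep the most confident pose."""
        responses = [d(image) for d in detectors]
        return max(responses, key=lambda r: r[0])[1]

    def multi_scale_descriptor(image, center, scales=(8, 16, 32)):
        """Concatenate toy statistics from windows of several sizes around the part."""
        x, y = center
        chunks = []
        for s in scales:
            patch = image[max(0, y - s):y + s, max(0, x - s):x + s]
            chunks.append([patch.mean(), patch.std()])
        return np.concatenate(chunks)

    # Two hypothetical pose-clustered detectors for the same part (e.g. a bird head).
    detectors = [lambda img: (0.4, (20, 20)), lambda img: (0.9, (42, 37))]
    image = np.random.rand(64, 64)
    center = detect_part(image, detectors)
    print(multi_scale_descriptor(image, center).shape)    # (6,): 3 scales x 2 statistics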

Named entity identification method based on neural network and computer storage medium

The invention provides a named entity recognition method based on a neural network and a computer storage medium. The method comprises the steps of: inputting a to-be-recognized character string into a classification model, recognizing the language intention category of the character string through the classification model, and looking up the entity label set corresponding to the recognized language intention category in a preset mapping table; inputting the character string into a named entity model to identify each character in the character string in turn and obtain probability values for the plurality of entity labels to which each character may belong; and, from the entity label set corresponding to the language intention category, finding the entity labels that match those of the characters in the string, and, for the matched entity labels, selecting the labels whose probability values rank in the top N among the matched labels as the entity labels of the corresponding characters. The incorrect entity labels in the named entity model's recognition result are filtered out by the language intention category produced by the classification model, so that the error recognition rate of the named entity model is reduced.
Owner:ECARX (HUBEI) TECHCO LTD
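The filtering step is essentially a set intersection followed by a top-N selection: the intent category picks an allowed tag set from the mapping table, and only allowed tags compete for each character. A small sketch of that logic follows; the intents, tag names and probabilities are invented for illustration.

    # Sketch of filtering NER tag probabilities with an intent-to-tag mapping table.

    INTENT_TO_TAGS = {
        "navigation": {"B-CITY", "I-CITY", "B-ROAD", "I-ROAD", "O"},
        "music":      {"B-SONG", "I-SONG", "B-ARTIST", "I-ARTIST", "O"},
    }

    def filter_tags(intent, char_tag_probs, top_n=1):
        """Keep only tags allowed for the intent, then pick the top-N per character."""
        allowed = INTENT_TO_TAGS[intent]
        result = []
        for probs in char_tag_probs:                     # one dict of tag -> prob per character
            candidates = {t: p for t, p in probs.items() if t in allowed}
            ranked = sorted(candidates.items(), key=lambda kv: kv[1], reverse=True)
            result.append([tag for tag, _ in ranked[:top_n]])
        return result

    # Per-character probabilities from a hypothetical NER model for a navigation query.
    char_tag_probs = [
        {"O": 0.90, "B-SONG": 0.05, "B-CITY": 0.05},
        {"B-CITY": 0.60, "B-SONG": 0.35, "O": 0.05},     # B-SONG would be wrong for navigation
        {"I-CITY": 0.70, "I-SONG": 0.25, "O": 0.05},
    ]
    print(filter_tags("navigation", char_tag_probs))     # [['O'], ['B-CITY'], ['I-CITY']]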

Shell food freshness determination system based on density model and method of system

Inactive · CN105547915A · Freshness real-time detection · Specific gravity measurement · Category recognition · Illuminance
The invention relates to the food field, in particular to a shell food freshness determination system based on a density model and a method for the system. A freshness weigher contains a two-dimensional code for software recognition, a groove for fixing the food, a gradienter (level), a weighing disc for measuring the mass of the shell food, and a positioning circle for estimating the size of the shell food. Freshness software (an app, a WeChat account, a microblog account or a cloud program) acquires an image through a phone camera under a normal daylight lamp (illuminance of 100-160 lux) and performs category recognition, mass recognition and size estimation of the shell food from the acquired image. The automatic substitution and judgment model is intelligent, provides real-time feedback, and has control parameters that can be updated in real time when networked; it is mainly used for real-time detection of the freshness of some shell foods in daily life. By solving the set of judgment models, a series of judgment values and conclusions about shell food freshness are obtained without measuring every specific index, and the information is transmitted in real time. Judgment results are reported as an indication interval with a certain confidence degree.
Owner:BEIJING WONDER TECH CO LTD
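The judgment model reduces, at its core, to comparing an estimated density (mass from the weighing disc divided by a volume estimated from the measured size) against a per-category interval. The sketch below shows that calculation with invented shape factors and density ranges; the patent's actual model parameters are updated over the network rather than fixed.

    import math

    FRESH_DENSITY_RANGE = {          # assumed g/cm^3 intervals per shellfish category
        "oyster": (1.05, 1.25),
        "clam":   (1.10, 1.35),
    }

    def estimated_volume(diameter_cm, shape_factor=0.52):
        """Approximate shell volume from the size estimated by the positioning circle."""
        return shape_factor * (4.0 / 3.0) * math.pi * (diameter_cm / 2.0) ** 3

    def judge_freshness(category, mass_g, diameter_cm):
        """Density-model judgment: compare estimated density with the category interval."""
        density = mass_g / estimated_volume(diameter_cm)
        low, high = FRESH_DENSITY_RANGE[category]
        verdict = "likely fresh" if low <= density <= high else "suspect"
        return density, verdict

    density, verdict = judge_freshness("oyster", mass_g=95.0, diameter_cm=6.0)
    print(f"density = {density:.2f} g/cm^3 -> {verdict}")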

Human body action recognition method based on cyclic convolutional neural network

The invention discloses a human body action recognition method based on a cyclic convolutional neural network, belongs to the fields of image classification, pattern recognition and machine learning, and addresses the low recognition accuracy caused by intra-class and inter-class variation of action categories and by video composed of continuous frames. The method comprises: constructing a data set by randomly selecting sequence pairs of the same length from a public data set, where each frame in each sequence comprises an RGB image and an optical flow image; constructing a twin network in which each branch sequentially comprises a CNN layer, an RNN layer and a temporal pooling layer; constructing an 'identification-verification' joint loss function; training the constructed deep convolutional neural network and the 'identification-verification' joint loss function on the data set; and passing the to-be-recognized human body action sequence pair through the trained deep convolutional neural network and the trained 'identification-verification' joint loss function in turn to obtain the action category recognition result for the sequence pair. The method is used for human body action recognition in images.
Owner:UNIV OF ELECTRONICS SCI & TECH OF CHINA
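The 'identification-verification' joint loss combines a per-sequence classification term with a same-or-different term on the embedding pair produced by the shared (twin) branches. The PyTorch sketch below shows the joint objective on pre-pooled toy features; the real pipeline uses a CNN + RNN + temporal pooling branch per sequence.

    import torch
    import torch.nn as nn

    class TwinBranch(nn.Module):
        """Toy shared branch standing in for the CNN + RNN + temporal pooling stack."""
        def __init__(self, feat_dim=128, emb_dim=64, n_classes=10):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(feat_dim, emb_dim), nn.ReLU())
            self.id_head = nn.Linear(emb_dim, n_classes)   # identification (classification)

        def forward(self, x):
            emb = self.encoder(x)
            return emb, self.id_head(emb)

    branch = TwinBranch()                                  # weights shared across the twin
    verify_head = nn.Linear(64, 2)                         # same / different action pair
    ce = nn.CrossEntropyLoss()

    # One fake sequence pair with its action labels (same class here).
    seq_a, seq_b = torch.randn(1, 128), torch.randn(1, 128)
    label_a, label_b = torch.tensor([3]), torch.tensor([3])
    same_pair = torch.tensor([1])                          # 1 = same action category

    emb_a, logits_a = branch(seq_a)
    emb_b, logits_b = branch(seq_b)

    identification_loss = ce(logits_a, label_a) + ce(logits_b, label_b)
    verification_loss = ce(verify_head((emb_a - emb_b).abs()), same_pair)
    joint_loss = identification_loss + verification_loss   # the joint objective
    joint_loss.backward()
    print(float(joint_loss))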

Train station intelligent train loading system

The invention provides a train station intelligent train loading system. The system comprises a belt conveyor, a surge bin, a quantifying bin, a coal chute and a computer. The belt conveyor is provided with a coal quantity online monitoring assembly, and the surge bin is provided with a material level monitoring assembly. The quantifying bin is provided with an unloading control assembly, a train carriage category recognition assembly and a train speed online monitoring assembly. A carriage volume and location detection assembly is arranged in each train carriage, and a coal chute unloading control assembly is arranged in the coal chute. All of the above assemblies are connected with the computer. The system can acquire the coal conveying flow in real time and detect the height of the material in the bin, and the computer decides whether to feed according to the material height data fed back. The system can capture images of the train carriages and acquire carriage numbers to recognize carriage types, establish a loading sample database for each carriage, automatically control the opening, closing and opening degree of the quantifying bin unloading gate, regulate the moving speed of the train, automatically control the raising and lowering of the coal chute, and automatically judge whether the carriages are filled with coal.
Owner:LIAONING TECHNICAL UNIVERSITY
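One concrete piece of the described control logic is the feed decision from the monitored material height in the bin. The sketch below shows a hysteresis version of that decision with assumed thresholds; the actual levels and control strategy are not given in the abstract.

    HIGH_LEVEL_M = 9.0   # assumed: stop feeding above this material height
    LOW_LEVEL_M = 3.0    # assumed: resume feeding below this material height

    def feed_decision(material_height_m, currently_feeding):
        """Hysteresis control: avoids rapidly toggling the belt conveyor."""
        if material_height_m >= HIGH_LEVEL_M:
            return False
        if material_height_m <= LOW_LEVEL_M:
            return True
        return currently_feeding

    feeding = True
    for height in [2.5, 5.0, 9.2, 8.0, 2.8]:             # simulated level readings
        feeding = feed_decision(height, feeding)
        print(f"height {height:4.1f} m -> feeding: {feeding}")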

Lane line type detection method and early warning device

The lane line type detection method comprises the following steps: (1) shooting the road surface in front of the vehicle body to obtain a road surface image; (2) obtaining a region-of-interest image from the road surface image and performing two operations on it: first, converting the region-of-interest image into a grayscale image and then applying image convolution filtering to obtain an edge grayscale image; second, converting the original RGB region-of-interest image into a Lab color space image; (3) performing row segmentation on the edge grayscale image to obtain multiple rows of segmented images, identifying lane line marks in each row of segmented images, combining the lane line marks into a complete fitted lane line, and finally classifying the lane line according to the fitted lane line and the Lab color space image. The method can cope with various illumination environments, improves efficiency, reduces the amount of calculation and the error rate, fits the actual lane more accurately than a straight line, distinguishes yellow from white well, improves the lane line category recognition rate, and can recognize lane lines with different features.
Owner:GUANGZHOU YINGKAN INFORMATION TECH CO LTD
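The pipeline splits into an edge path (grayscale plus convolution filtering) and a color path (Lab conversion), then works row band by row band. The OpenCV sketch below runs both paths on a synthetic region of interest and uses the Lab b channel to tell yellow from white; the synthetic image, thresholds and band size are placeholders, not the patented parameters.

    import cv2
    import numpy as np

    # Synthetic region of interest with one white and one yellow lane mark (BGR).
    roi = np.zeros((120, 320, 3), dtype=np.uint8)
    roi[:, 150:156] = (255, 255, 255)
    roi[:, 220:226] = (0, 215, 215)

    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    edges = np.abs(cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3))   # edge grayscale image
    lab = cv2.cvtColor(roi, cv2.COLOR_BGR2LAB)                   # b channel separates yellow/white

    rows_per_band = 20
    for top in range(0, roi.shape[0], rows_per_band):            # row-wise segmentation
        band = edges[top:top + rows_per_band]
        cols = np.where(band.max(axis=0) > 100)[0]               # columns with strong edges
        if cols.size == 0:
            continue
        # Group nearby edge columns into individual lane marks.
        for mark in np.split(cols, np.where(np.diff(cols) > 10)[0] + 1):
            b_mean = lab[top:top + rows_per_band, mark, 2].mean()
            color = "yellow" if b_mean > 145 else "white"        # crude Lab-b threshold
            print(f"band at row {top}: mark near column {int(mark.mean())}, color {color}")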