2668 results for "Identification rate" patented technology

Identification rate. Definition: the identification rate is "[t]he rate at which a biometric subject in a database is correctly identified."

Method for personalized television voice wake-up by voiceprint and voice identification

The invention discloses a method for personalized television voice wake-up using voiceprint and voice identification: the identity of a television user is confirmed through voiceprint identification, and the television is controlled to perform personalized voice wake-up based on the confirmed identity and the voice identification result of the user's speech. The method relates to voiceprint identification and voice identification technologies. The system comprises a voice control system (1), an information storage unit (2) and a television main controller (3), which are connected by electrical signals. The method features short training time, very high voiceprint and voice identification speed, and a high identification rate. Voiceprint and voice identification are completed entirely through offline training and testing; identification results need not be sent to a cloud server, so the method is convenient to use and the security of family information is guaranteed. The method can also be applied to user-personalized automatic voice channel changing, can be ported to a common high-speed DSP (digital signal processor) or chip, and can be widely applied in smart-home-related fields.
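The abstract does not specify how identity confirmation is scored, but a common offline approach is to compare an utterance's feature vector against each enrolled user's voiceprint template and accept the best match only above a threshold. A minimal sketch, assuming cosine similarity over illustrative feature vectors (the names and values below are hypothetical, not from the patent):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def identify_speaker(features, enrolled, threshold=0.85):
    """Return the enrolled user whose template best matches `features`,
    or None if no template clears the acceptance threshold."""
    best_user, best_score = None, threshold
    for user, template in enrolled.items():
        score = cosine_similarity(features, template)
        if score > best_score:
            best_user, best_score = user, score
    return best_user

# Hypothetical enrolled templates (in practice, e.g. averaged spectral features).
enrolled = {"alice": [0.9, 0.1, 0.3], "bob": [0.1, 0.8, 0.5]}
print(identify_speaker([0.88, 0.12, 0.28], enrolled))  # alice
```

Everything here runs locally, consistent with the patent's point that no identification result needs to leave the home.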

Improved multi-instrument reading identification method for a transformer substation inspection robot

Inactive · CN103927507A · Improve robustness · Meet the requirements of automatic detection and identification of readings · Character and pattern recognition · Hough transform · Scale-invariant feature transform
The invention discloses an improved multi-instrument reading identification method for a transformer substation inspection robot. First, for instrument images of different types, template processing is carried out, and the position information of the minimum and maximum scale marks of each instrument is stored in a template database. For instrument images acquired in real time by the robot, the template image of the corresponding piece of equipment is retrieved from a background service, and a scale-invariant feature transform (SIFT) algorithm is used to extract the instrument dial area sub-image from the input image by matching. Binarization and pointer-skeleton processing are then performed on the dial sub-image; a fast Hough transform detects the pointer lines, noise interference is eliminated, the position and directional angle of the pointer are accurately located, and the pointer reading is computed. The algorithm has been field-tested on a domestic 500 kV intelligent substation inspection robot: the integrated recognition rate over various instruments exceeds 99%, the precision and robustness of instrument reading are high, and the requirements of on-site substation application are fully satisfied.
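The final step, converting the detected pointer angle into a reading, follows directly from the stored min/max scale positions: linear interpolation between the two scale marks. A minimal sketch of that arithmetic (angle conventions and values are illustrative assumptions, not taken from the patent):

```python
def pointer_reading(pointer_angle, min_angle, max_angle, min_scale, max_scale):
    """Linearly interpolate a dial reading from the detected pointer angle.

    Angles are in degrees, measured the same way for the pointer and for
    the min/max scale marks stored in the template database.
    """
    fraction = (pointer_angle - min_angle) / (max_angle - min_angle)
    return min_scale + fraction * (max_scale - min_scale)

# A dial whose 0-to-10 scale sweeps 270 degrees:
print(pointer_reading(135.0, 0.0, 270.0, 0.0, 10.0))  # 5.0
```

Non-linear dials would need a per-instrument calibration curve instead of this straight-line mapping.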

Automatic character extraction and recognition system and method for low-resolution medical bill image

The invention discloses an automatic character extraction and recognition system and method for low-resolution medical bill images. The system comprises an image preprocessing module, a field segmentation module, a single-character segmentation module and a character recognition module. The method comprises the steps of image preprocessing, field-area recognition, character-string segmentation, and character recognition and verification. The system and method are well suited to automatic character extraction and recognition in low-resolution medical bill images. Layout analysis of the bill allows the available information to be fully exploited. For images of low quality that are heavily affected by noise and low resolution, the semantics of each field area make it easy to segment a character string into single characters, converting recognition of the image into recognition of single characters. For example, an invoice number composed purely of digits can be recognized by a method specialized for digit-only images: when the invoice number is recognized, the recognition range is limited to the ten digits 0-9, so the recognition rate can be greatly increased.
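The digit-restriction idea can be illustrated independently of any particular classifier: given per-class confidence scores for a character, only the ten digit classes are considered when the field is known to be numeric. A minimal sketch (the score values and class names are made up for illustration):

```python
def recognize_digit(scores):
    """Pick the best-scoring class, considering only the digits 0-9.

    `scores` maps candidate classes to classifier confidences; restricting
    the search space to ten classes is what raises the recognition rate on
    digit-only fields such as invoice numbers.
    """
    digits = set("0123456789")
    candidates = {c: s for c, s in scores.items() if c in digits}
    return max(candidates, key=candidates.get)

# 'T' scores highest overall, but a digit-only field cannot contain 'T':
scores = {"7": 0.41, "1": 0.38, "T": 0.45, "l": 0.40}
print(recognize_digit(scores))  # 7
```

The look-alike classes ('T' for '7', 'l' for '1') are exactly the confusions that a semantic field restriction eliminates.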

Behavior identification method based on recurrent neural network and human skeleton movement sequences

The invention discloses a behavior identification method based on a recurrent neural network and human skeleton movement sequences. The method comprises the following steps: normalizing the joint coordinates of the extracted human skeleton posture sequences to eliminate the influence of the body's absolute spatial position on the identification process; filtering the skeleton joint coordinates with a simple smoothing filter to improve the signal-to-noise ratio; and feeding the smoothed data into a hierarchical bidirectional recurrent neural network for deep feature extraction and identification. The invention also provides a hierarchical unidirectional recurrent neural network model to meet practical real-time online analysis requirements. The method designs an end-to-end analysis pipeline according to the structural characteristics and motion correlations of the human body, achieving high-precision identification while avoiding complex computation, and is therefore suitable for practical application. The method is significant for intelligent video surveillance based on depth cameras, intelligent traffic management, smart cities and related fields.
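The two preprocessing steps, position normalization and smoothing, can be sketched concretely. A minimal version, assuming joint 0 is the root joint and a simple moving-average filter (both simplifying assumptions; the patent does not name a specific root joint or filter kernel):

```python
def normalize_and_smooth(frames, window=3):
    """Preprocess a skeleton sequence: subtract the root joint from every
    joint in each frame (removing absolute position), then apply a
    moving-average smoothing filter along time to raise the SNR.

    `frames` is a list of frames; each frame is a list of (x, y, z) joints,
    with joint 0 taken as the root.
    """
    # Root-centre each frame so absolute spatial position no longer matters.
    centred = []
    for frame in frames:
        rx, ry, rz = frame[0]
        centred.append([(x - rx, y - ry, z - rz) for x, y, z in frame])
    # Moving-average filter along time, per joint and per coordinate.
    half = window // 2
    smoothed = []
    for t in range(len(centred)):
        lo, hi = max(0, t - half), min(len(centred), t + half + 1)
        frame = []
        for j in range(len(centred[0])):
            pts = [centred[k][j] for k in range(lo, hi)]
            n = len(pts)
            frame.append(tuple(sum(p[i] for p in pts) / n for i in range(3)))
        smoothed.append(frame)
    return smoothed
```

The output sequence would then be fed to the hierarchical recurrent network; that network itself is beyond what the abstract specifies.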

Apparatus and System for Recognizing Environment Surrounding Vehicle

In conventional systems that use an onboard camera disposed rearward of a vehicle to recognize objects surrounding the vehicle, a road surface marking captured by the camera appears at the lower end of the image, which makes it difficult to predict the specific position in the image at which the marking will appear. Further, the camera's angle of depression is large, so the object is visible only for a short period of time. It is therefore difficult to improve the recognition rate and to reduce false recognition. In the present invention, recognition results (type, position, angle, recognition time) from a camera disposed forward of the vehicle are used to predict the specific timing and position at which the object will appear in the field of view of the rearward camera. The parameters of the rearward camera's recognition logic and its processing timing are then optimally adjusted. Further, luminance information from the forward camera's image is used to predict the changes likely to occur in the luminance of the rearward camera's field of view, and the gain and exposure time of the rearward camera are adjusted accordingly.
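The timing prediction reduces to simple kinematics under the assumption of straight, constant-speed travel. A minimal sketch (the parameter names and geometry are illustrative assumptions, not values from the patent):

```python
def predict_rear_appearance(forward_position_m, vehicle_speed_mps,
                            rear_fov_start_m):
    """Predict how long until a marking seen by the front camera enters the
    rear camera's field of view, assuming straight, constant-speed travel.

    `forward_position_m`: the marking's distance ahead of the vehicle when
    the front camera recognized it; `rear_fov_start_m`: distance behind the
    vehicle at which the rear camera's field of view begins.
    """
    distance_to_travel = forward_position_m + rear_fov_start_m
    return distance_to_travel / vehicle_speed_mps

# Marking 10 m ahead, rear FOV starting 2 m behind, vehicle at 12 m/s:
print(predict_rear_appearance(10.0, 12.0, 2.0))  # 1.0 second
```

In the patented system this predicted instant is when the rear camera's recognition logic, gain and exposure would be pre-adjusted.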

Moving target classification method based on online learning

Inactive · CN101389004A · Automatic judgment · Algorithms are efficient · Image analysis · Closed circuit television systems · Classification methods · Image sequence
The invention relates to a method that automatically classifies moving targets through online learning. The method models the background of an image sequence and detects moving targets; handles scene variation, covers the viewing angle and partitions the scene; extracts and clusters feature vectors; and labels region classes. A classifier is initialized from the feature vectors of all moving-target regions passing through a sub-region, using the number of moving targets in the sub-region and a threshold to initialize Gaussian distributions and prior probabilities. The moving targets in the sub-region are then classified, and the classifier parameters are iteratively optimized online; the classification results obtained while tracking a moving target are combined to output its final class. The invention can be used to detect anomalies in surveillance scenes, establish rules for each target class, and enhance the security of a surveillance system; to identify objects in surveillance scenes while reducing the complexity of the identification algorithm and improving the identification rate; and to support semantic understanding of surveillance scenes, identifying the classes of moving targets and aiding comprehension of behavioral events occurring in the scene.
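The core machinery, per-class Gaussians with prior probabilities whose parameters are updated online, can be sketched for a single scalar feature (e.g. target area). This is a minimal stand-in using Welford-style incremental mean/variance updates; the patent's actual feature vectors and update rules are not specified in the abstract:

```python
import math

class OnlineGaussianClassifier:
    """Per-class Gaussians over a scalar feature, with priors proportional
    to observed counts; parameters are updated iteratively as new labeled
    samples arrive, and classification picks the highest posterior."""

    def __init__(self):
        self.stats = {}  # class -> [count, mean, sum of squared deviations]

    def update(self, label, x):
        n, mean, m2 = self.stats.setdefault(label, [0, 0.0, 0.0])
        n += 1
        delta = x - mean
        mean += delta / n
        m2 += delta * (x - mean)
        self.stats[label] = [n, mean, m2]

    def classify(self, x):
        total = sum(s[0] for s in self.stats.values())
        best, best_p = None, -1.0
        for label, (n, mean, m2) in self.stats.items():
            var = max(m2 / n if n > 1 else 1.0, 1e-6)
            prior = n / total
            likelihood = (math.exp(-(x - mean) ** 2 / (2 * var))
                          / math.sqrt(2 * math.pi * var))
            if prior * likelihood > best_p:
                best, best_p = label, prior * likelihood
        return best
```

In the patented scheme the per-track classifications would additionally be fused over time before the final class is output.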

PCANet-CNN-based arbitrary attitude facial expression recognition method

The invention discloses a PCANet-CNN-based arbitrary-attitude facial expression recognition method. The method comprises the following steps: preprocessing the original images to obtain grayscale facial images of uniform size, including frontal facial images and profile facial images; inputting the frontal images into the unsupervised feature learning model PCANet to learn the features corresponding to the frontal faces; inputting the profile images into the supervised feature learning model CNN, trained with the frontal features obtained by unsupervised learning as labels, so as to learn a mapping from profile features to frontal features; obtaining, through this mapping, uniform frontal features for facial images at arbitrary attitudes; and finally feeding the uniform frontal features into an SVM to obtain a single recognition model for arbitrary attitudes. The method solves the problem of low model recognition rates caused by modeling each attitude separately, as in traditional multi-attitude facial expression recognition, and by factors such as pose, and can effectively improve the accuracy of multi-attitude facial expression recognition.
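The key idea, regressing frontal features from profile inputs using the frontal features as training targets, can be shown in miniature with paired scalar features and a closed-form least-squares fit (a toy stand-in for the CNN regression; all values below are hypothetical):

```python
def fit_linear_map(profile_feats, frontal_feats):
    """Least-squares fit of frontal = a * profile + b over paired scalar
    features -- a toy analogue of training a network to map profile-image
    features into the frontal (PCANet) feature space."""
    n = len(profile_feats)
    mx = sum(profile_feats) / n
    my = sum(frontal_feats) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(profile_feats, frontal_feats))
    sxx = sum((x - mx) ** 2 for x in profile_feats)
    a = sxy / sxx
    b = my - a * mx
    return a, b

# Paired training features (hypothetical): frontal = 2 * profile + 1.
a, b = fit_linear_map([1.0, 2.0, 3.0], [3.0, 5.0, 7.0])
print(a * 4.0 + b)  # 9.0
```

Once every attitude is mapped into the common frontal feature space, a single SVM suffices, which is the point of the patent's design.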

Three-dimensional human face recognition method based on human face full-automatic positioning

The present invention discloses a three-dimensional human face identification method based on fully automatic face positioning, belonging to the fields of computer vision and pattern recognition. The virtual face image generation procedure comprises the following steps: a two-dimensional face shape model and a local texture model are established; a two-dimensional face image is positioned precisely; the two-dimensional face image is three-dimensionally reconstructed according to the positioning result to obtain a three-dimensional face image; and the three-dimensional face image is processed with an illumination model to obtain virtual images with varying poses and illumination. The identification procedure comprises the following steps: features are extracted from the face image to be identified and compressed; the face is identified according to the compressed features. The embodiment generates virtual images by three-dimensional reconstruction of a two-dimensional face image and illumination-model processing, thereby enlarging the sample space of pose and illumination variation; at the same time the three-dimensional reconstruction speed is greatly improved, ensuring that face identification has high efficiency and a high recognition rate.
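Rendering virtual images under new illumination requires some reflectance model; the abstract does not say which one the patent uses, but the simplest common choice is Lambertian shading. A minimal sketch under that assumption:

```python
def lambertian_intensity(normal, light, albedo=1.0):
    """Lambertian shading for rendering a virtual face image under a new
    illumination direction: I = albedo * max(0, n . l), with `normal` and
    `light` given as unit 3-vectors. (An illustrative illumination model;
    the patent does not specify its own.)"""
    ndotl = sum(a * b for a, b in zip(normal, light))
    return albedo * max(0.0, ndotl)

# A surface patch facing straight at the light is fully lit:
print(lambertian_intensity((0.0, 0.0, 1.0), (0.0, 0.0, 1.0)))  # 1.0
# Light from behind the patch contributes nothing:
print(lambertian_intensity((0.0, 0.0, 1.0), (0.0, 0.0, -1.0)))  # 0.0
```

Sweeping the light direction over the reconstructed 3D surface is what enlarges the illumination sample space described above.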