
629 results for patented technology yielding accurate recognition results

Named entity identification method and device

The invention provides a named entity identification method and a named entity identification device capable of accurately identifying a named entity, in particular a named entity in the field of e-business. The method comprises: acquiring a vector library; carrying out word segmentation on a training corpus text string to obtain a plurality of sample words; querying the vector library for each sample word in turn to obtain a first feature vector, which comprises the word vector and word-class vector corresponding to that word as well as the entity-tag vector corresponding to the previous word; taking all the first feature vectors together as input and training a neural-network named entity identification model; carrying out word segmentation on a to-be-predicted text string to obtain a plurality of to-be-tested words; querying the vector library for each to-be-tested word in turn to obtain a second feature vector, which likewise comprises the word vector and word-class vector of that word as well as the entity-tag vector of the previous word; and respectively inputting the second feature vectors corresponding to all the to-be-tested words into the model, which outputs the entity identifiers of the to-be-tested words.
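The per-word feature assembly described above can be sketched as follows. This is an illustrative toy example, not the patented implementation: the vector library, tag set, and dimensions are all made up, and the point is only the concatenation of word vector, word-class vector, and the previous word's entity-tag vector.

```python
# Hypothetical vector library: word -> (word vector, word-class vector).
VECTOR_LIBRARY = {
    "red":   ([0.1, 0.3], [1.0, 0.0]),   # class: attribute word
    "dress": ([0.7, 0.2], [0.0, 1.0]),   # class: product noun
}
# Hypothetical entity-tag vectors (BIO-style tagging).
TAG_VECTORS = {
    "O": [1.0, 0.0, 0.0],       # outside any entity
    "B-PROD": [0.0, 1.0, 0.0],  # beginning of a product entity
    "I-PROD": [0.0, 0.0, 1.0],  # inside a product entity
}

def feature_vector(word, prev_tag):
    """Concatenate word vector + word-class vector + previous word's tag vector."""
    word_vec, class_vec = VECTOR_LIBRARY[word]
    return word_vec + class_vec + TAG_VECTORS[prev_tag]

def featurize(words, tags):
    """One feature vector per word; the first word's 'previous tag' is O."""
    prev_tags = ["O"] + tags[:-1]
    return [feature_vector(w, t) for w, t in zip(words, prev_tags)]

features = featurize(["red", "dress"], ["O", "B-PROD"])
```

In a full pipeline these vectors would be fed, all together, to the neural-network model as the abstract describes; here they are just plain Python lists.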
Owner:BEIJING JINGDONG SHANGKE INFORMATION TECH CO LTD +1

Calligraphy character identifying method

The invention discloses a calligraphy character identification method. A single calligraphy character image is collected and the corresponding Chinese character semantics are manually annotated. After binarization, denoising and normalization are applied to the image, the feature information of the calligraphy character is extracted and stored in a feature database. The feature information comprises the four boundary point positions of the character, the average stroke-crossing counts, and the projection values and contour points in the horizontal and vertical directions. A to-be-identified calligraphy character image is then processed in the same way; after its feature information is extracted and a preliminary screening is performed, shape matching is carried out against the characters in the feature database whose shapes are similar to the to-be-identified character. Finally, weighted scores are computed for the images sharing the same semantics, the results are merged, and the identification result is given. The method has the advantages of small computational cost, giving an accurate identification result in a short time, and imposing no special requirements on the to-be-identified calligraphy character images provided by users.
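The preprocessing and feature-extraction steps can be sketched in a minimal form. This is a hedged illustration only, assuming a grayscale image as a list of rows and a fixed toy threshold; the actual patented feature set (e.g. stroke-crossing counts, contour points) is richer.

```python
def binarize(gray, threshold=128):
    """Binarize a grayscale image: 1 = ink (dark pixel), 0 = background."""
    return [[1 if px < threshold else 0 for px in row] for row in gray]

def bounding_box(img):
    """The four boundary positions (top, bottom, left, right) of the ink pixels."""
    rows = [r for r, row in enumerate(img) if any(row)]
    cols = [c for row in img for c, px in enumerate(row) if px]
    return min(rows), max(rows), min(cols), max(cols)

def projections(img):
    """Ink counts projected onto the horizontal and vertical axes."""
    horiz = [sum(row) for row in img]       # per-row ink count
    vert = [sum(col) for col in zip(*img)]  # per-column ink count
    return horiz, vert

# Toy 3x3 glyph shaped like a plus sign.
glyph = binarize([
    [255, 0, 255],
    [0,   0, 0],
    [255, 0, 255],
])
box = bounding_box(glyph)
horiz, vert = projections(glyph)
```

Features like these would then be stored in the feature database and compared during the shape-matching stage.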
Owner:ZHEJIANG UNIV

Objectionable image distinguishing method integrating skin color, face and sensitive position detection

The invention relates to an objectionable image distinguishing method integrating skin color, face and sensitive-position detection. The method comprises the following steps: a skin color model is first built and face detection is carried out; a feature vector composed of skin-color and face features is extracted, and an SVM (support vector machine) classifier is obtained by training with the SVM algorithm. Then, for the female breast as a local key position of the human body, SIFT (scale-invariant feature transform) features are extracted and an Adaboost classifier is obtained by training with the Adaboost algorithm. Next, for the female private parts as a local key position of the human body, the trunk region of the body is determined, and Haar-like features are used as a template for searching and matching within the trunk region. Finally, image detection is carried out with the SVM classifier, the Adaboost classifier and the template matching method; the detection results are integrated with the C4.5 decision tree method to build a decision tree model, the decision tree model is used to recognize objectionable images, and the final distinguishing result is given. The method improves detection accuracy while preserving execution speed.
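The final fusion stage can be sketched as a small hand-written decision rule over the three detector outputs. The branching thresholds and the tree shape below are illustrative assumptions, not the trained C4.5 model from the patent:

```python
def fuse(svm_score, adaboost_hit, template_hit):
    """Combine the three detector outputs into a single objectionable/benign decision.

    svm_score: skin/face SVM confidence in [0, 1] (hypothetical scale)
    adaboost_hit: whether the Adaboost sensitive-region detector fired
    template_hit: whether the Haar-like template match fired
    """
    if adaboost_hit or template_hit:
        # A sensitive-region detector fired: require only weak skin/face evidence.
        return svm_score > 0.3
    # No region detector fired: demand strong SVM confidence on its own.
    return svm_score > 0.8
```

In the patented method this decision logic is learned from data by C4.5 rather than written by hand; the sketch only shows the shape of the fusion.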
Owner:XI AN JIAOTONG UNIV

Invoice recognition method and system based on deep learning

The invention discloses an invoice recognition method and system based on deep learning, relates to the technical field of invoice recognition, and solves the prior-art technical problem of inaccurate invoice OCR data acquisition caused by varied invoice types and nonstandard invoice pasting. The method comprises the following steps: obtaining a plurality of sample reimbursement lists marked with invoice types and position coordinates, and constructing a training data set and a verification data set; based on the training data set, combining multiple front-end networks with a Faster-RCNN framework to train a plurality of invoice detection models; carrying out performance verification on each invoice detection model through the verification data set and screening out the optimal invoice detection model; detecting an invoice reimbursement list with the optimal invoice detection model, and identifying each invoice image in the list along with its invoice type and position coordinates; and carrying out character recognition on the invoice images at the identified position coordinates with an OCR model, and packaging and outputting the character recognition content, the invoice type and/or the position coordinates of each invoice as the invoice recognition result.
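The model-screening step amounts to picking the candidate with the best validation performance. A minimal sketch, with made-up model names and mAP scores purely for illustration:

```python
def select_best_model(validation_scores):
    """Pick the detection model with the highest validation score (e.g. mAP)."""
    return max(validation_scores, key=validation_scores.get)

# Hypothetical candidates: each pairs a different backbone with Faster-RCNN.
scores = {
    "resnet50+faster_rcnn": 0.91,
    "vgg16+faster_rcnn": 0.87,
    "mobilenet+faster_rcnn": 0.84,
}
best = select_best_model(scores)
```

Only the selected model is then used to detect invoices on incoming reimbursement lists.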
Owner:SUNING COM CO LTD

Hand gesture recognition method based on switching Kalman filtering model

The invention discloses a hand gesture recognition method based on a switching Kalman filtering model (S-KFM). The method comprises the steps that: a hand gesture video database is established and pre-processed; the image backgrounds of the video frames are removed, and the two hand regions and the face region are separated out based on a skin color model; morphological operations are conducted on the three regions, their mass centers are calculated, and the position vectors between the face and each hand and between the two hands are obtained; the optical flow field is calculated, and the optical flow vectors at the mass centers of the two hands are obtained; a coding rule is defined, and the two optical flow vectors and three position vectors of each frame are coded to obtain a hand gesture feature chain code library; an S-KFM graph model is established, in which the feature chain code sequence serves as the observation signal and the hand gesture meaning sequence serves as the output signal; optimal parameters are learned with the feature chain code library as the S-KFM training samples; the same steps are then executed for a to-be-recognized hand gesture video to obtain its feature chain code, inference is conducted with this chain code as the S-KFM input, and the hand gesture recognition result is finally obtained.
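The coding rule can be sketched as quantizing each 2-D flow or position vector into one of eight direction codes, a common chain-coding convention; the actual rule defined in the patent may differ.

```python
import math

def chain_code(dx, dy):
    """Quantize a 2-D vector into an 8-direction code (0 = east, counted CCW)."""
    angle = math.atan2(dy, dx) % (2 * math.pi)
    # Shift by half a sector (pi/8) so each code spans a 45-degree wedge.
    return int((angle + math.pi / 8) // (math.pi / 4)) % 8

# Toy per-frame vectors: east, northeast, north, west.
codes = [chain_code(dx, dy) for dx, dy in [(1, 0), (1, 1), (0, 1), (-1, 0)]]
```

Concatenating such codes per frame yields the feature chain code sequence that the S-KFM consumes as its observation signal.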
Owner:XIAN TECHNOLOGICAL UNIV

Real-time detection method for vehicle load distribution on a cable-supported bridge deck

The invention discloses a real-time detection method for vehicle load distribution on a cable-supported bridge deck. The method comprises the steps of: shooting a bridge deck image; conducting perspective correction and enhancement processing on the image to obtain an enhanced bridge deck image, and judging bridge deck vehicles with an edge-information-based detection method; tracking the vehicle image and correcting the deformed structure of the vehicle in the acquired image; using the bridge deck as an absolute coordinate system, accurately drawing the running track of the automobile tires in each video segment on the bridge deck, and splicing the tracks of the same vehicle across different video pictures according to a same-track principle to obtain the running track of each vehicle on the bridge deck, thereby achieving real-time vehicle load tracking. The method has a wide application range, low cost, real-time detection results and small environmental influence. By means of perspective correction and image enhancement, high-quality early-stage images can be obtained, and the vehicle occlusion problem caused by oblique image acquisition is well solved by a dynamic template matching technique.
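The track-splicing step can be sketched as joining two track segments when the end of one lies close to the start of the next in the shared bridge-deck coordinate system. The distance threshold here is an illustrative assumption, not a value from the patent:

```python
def splice_tracks(track_a, track_b, max_gap=2.0):
    """Join two (x, y) track segments if track_b plausibly continues track_a.

    Returns the spliced track, or None when the gap between the end of
    track_a and the start of track_b exceeds max_gap (not the same vehicle).
    """
    ax, ay = track_a[-1]
    bx, by = track_b[0]
    gap = ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
    if gap <= max_gap:
        return track_a + track_b
    return None

# Toy segments from two camera views, in bridge-deck coordinates (meters).
joined = splice_tracks([(0.0, 0.0), (5.0, 0.2)], [(6.5, 0.3), (12.0, 0.5)])
```

A real implementation would also compare headings and timestamps before splicing; the sketch shows only the spatial "same track" test.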
Owner:CHONGQING UNIV

Traffic sign recognition method and device and training method and device of neural network model

The embodiments of the invention disclose a traffic sign recognition method and device, and a training method and device of a neural network model. The traffic sign recognition method comprises the steps of: acquiring position information and category information of the current sub-image of a traffic sign board area in the current road image, wherein the position information and the category information are obtained by conducting feature extraction on the to-be-recognized traffic sign board image in the current road image through a preset target detection model; according to the position information and the category information, performing feature extraction on the current sub-image by using a convolutional neural network (CNN) to obtain a feature sequence of the current sub-image; and obtaining target semantic information corresponding to the current sub-image according to the feature sequence and a preset convolutional recurrent neural network (CRNN) model, wherein the CRNN model associates the feature sequence of an image with its corresponding semantic information. With this technical scheme, the recognition precision of traffic signs is improved.
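A CRNN commonly maps its per-step feature sequence to text via a CTC-style greedy decode: collapse repeated labels, then drop blanks. The label values below are toy per-step argmax outputs, not produced by any real model, and the patent does not confirm this exact decoding scheme:

```python
BLANK = "-"  # the CTC blank symbol (hypothetical label set)

def greedy_ctc_decode(step_labels):
    """Collapse consecutive repeats, then remove blank symbols."""
    out = []
    prev = None
    for lab in step_labels:
        if lab != prev and lab != BLANK:
            out.append(lab)
        prev = lab
    return "".join(out)

# Toy per-timestep labels emitted by the recurrent layers.
text = greedy_ctc_decode(["S", "S", "-", "T", "O", "O", "-", "P"])
```

The blank between the two "O" steps is what allows a genuinely doubled letter to survive the repeat-collapsing rule.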
Owner:MOMENTA SUZHOU TECH CO LTD