
Palm image recognition method, system and device

A palm image recognition technology in the field of image recognition, which addresses the problem of low accuracy.

Active Publication Date: 2020-07-07
厦门熵基科技有限公司
Cites 6 · Cited by 0

AI Technical Summary

Problems solved by technology

[0005] The present invention provides a palm image recognition method, system, and device, which solve the technical problem in the prior art of low accuracy when a small network model is used to detect the palm in an image.


Examples


Embodiment 1

[0050] See Figure 1, a flowchart of the palm image recognition method provided by the embodiments of the present invention.

[0051] The palm image recognition method provided by the present invention is applied to a pre-established MobileNet neural network model, which uses the MaxMin function as its activation function. The method includes the following steps:

[0052] Obtain a variety of palm images in visible light scenes, covering different people, lighting conditions, angles, and ages; training the MobileNet neural network model on such varied palm images broadens the range of conditions under which it can recognize the palm in a palm image;

[0053] Annotate the key point information of the palm area in each palm image, ...
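The excerpt does not give a storage format for these annotations. As a rough sketch, assuming each image is labeled with the nine (x, y) palm key point pairs mentioned in Embodiment 2 (the field names are hypothetical, not taken from the patent), an annotation record could look like this:

```python
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class PalmAnnotation:
    """Hypothetical annotation record for one training image; the field
    names are illustrative and not taken from the patent."""
    image_path: str                        # palm image captured under visible light
    keypoints: List[Tuple[float, float]]   # nine (x, y) palm key point pairs, pixel coords

    def __post_init__(self) -> None:
        if len(self.keypoints) != 9:
            raise ValueError("expected nine palm key point pairs")
```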

Embodiment 2

[0070] In this embodiment, training the MobileNet neural network model can be divided into the following three parts: (1) Model input: the palm images and the nine annotated palm key point pairs; (2) Model structure: for the palm detection task, a palm detection CNN framework is built with MobileNet as the backbone and the multi-scale sampling fusion operation of the SSD model as the detection head, and the activation function used in the convolution operation modules of the MobileNet and SSD structures is changed from the ReLU function to the MaxMin function; (3) Model training: following the back-propagation rules, during forward propagation of image feature information the output of each linear convolution operation passes through the nonlinear MaxMin activation, so its feature values are not easily lost; consequently, during back-propagation of the parameters the gradient values are not prone to vanishing, and there is less overfitting...
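The exact definition of the MaxMin function is not spelled out in this excerpt. A minimal sketch, assuming MaxMin concatenates the positive and negative parts of the pre-activation along the channel axis (so negative responses are kept rather than zeroed as ReLU does), dropped into a MobileNet-style depthwise-separable block, might look like this in PyTorch (illustrative only, not the patented implementation):

```python
import torch
from torch import nn


def maxmin(x: torch.Tensor) -> torch.Tensor:
    # Hypothetical MaxMin: keep the positive and negative parts of the
    # pre-activation as separate channels instead of zeroing the negatives
    # (as ReLU does), so no feature value or gradient path is discarded.
    return torch.cat([x.clamp(min=0), x.clamp(max=0)], dim=1)


class DepthwiseSeparableMaxMin(nn.Module):
    """Illustrative MobileNet-style depthwise-separable block with ReLU
    replaced by maxmin(). Since maxmin() doubles the channel count, the
    pointwise convolution reads 2 * in_channels and the block emits
    2 * out_channels feature maps."""

    def __init__(self, in_channels: int, out_channels: int, stride: int = 1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_channels, in_channels, kernel_size=3,
                                   stride=stride, padding=1,
                                   groups=in_channels, bias=False)
        self.bn1 = nn.BatchNorm2d(in_channels)
        self.pointwise = nn.Conv2d(2 * in_channels, out_channels,
                                   kernel_size=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = maxmin(self.bn1(self.depthwise(x)))   # was ReLU in stock MobileNet
        x = maxmin(self.bn2(self.pointwise(x)))   # was ReLU in stock MobileNet
        return x


# Quick shape check: a 32-channel input yields 2 * 64 = 128 output channels.
block = DepthwiseSeparableMaxMin(32, 64)
y = block(torch.randn(1, 32, 56, 56))   # y.shape == (1, 128, 56, 56)
```

Because the activation doubles the channel count, every layer that consumes its output must be sized accordingly; that is the main structural change compared with a stock ReLU block.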

Embodiment 3

[0074] As shown in Figure 2, a palm image recognition system includes a palm image acquisition module 201, an information labeling module 202, a MobileNet neural network model module 203, a training module 204, and a real-time image input module 205;

[0075] The palm image acquisition module 201 is used to acquire different palm images in a visible light scene;

[0076] The information labeling module 202 is used for labeling key point information of the palm area in the palm image;

[0077] The MobileNet neural network model module 203 is used to provide a MobileNet neural network model, and the MobileNet neural network model uses the MaxMin function as the activation function;

[0078] The training module 204 is used to input the labeled palm image into the MobileNet neural network model for training;

[0079] The real-time image input module 205 is used to input real-time images into the trained MobileNet neural network model.
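As a rough sketch of how these modules could be wired together (the training loop, loss, and optimizer here are assumptions, not details from the patent):

```python
from typing import Callable

import torch
from torch import nn


class PalmRecognitionSystem:
    """Illustrative wiring of the modules in Embodiment 3. The module
    numbers in the comments come from the patent; everything else is an
    assumption for the sketch. Image acquisition (201) and key point
    labeling (202) are assumed to have produced the tensors passed in."""

    def __init__(self, model: nn.Module,
                 optimizer: torch.optim.Optimizer,
                 loss_fn: Callable[[torch.Tensor, torch.Tensor], torch.Tensor]):
        self.model = model            # MobileNet neural network model module (203)
        self.optimizer = optimizer
        self.loss_fn = loss_fn

    def train(self, images: torch.Tensor, targets: torch.Tensor,
              epochs: int = 10) -> None:
        """Training module (204): fit the model on labeled palm images."""
        self.model.train()
        for _ in range(epochs):
            self.optimizer.zero_grad()
            loss = self.loss_fn(self.model(images), targets)
            loss.backward()           # back-propagation, as in paragraph [0070]
            self.optimizer.step()

    @torch.no_grad()
    def recognize(self, frame: torch.Tensor) -> torch.Tensor:
        """Real-time image input module (205): run the trained model on a frame."""
        self.model.eval()
        return self.model(frame)
```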


Abstract

The invention discloses a palm image recognition method, system, and device. The method is based on a pre-established MobileNet neural network model that uses a MaxMin function as its activation function, and comprises the following steps: acquiring different palm images in a visible light scene and marking key point information of the palm region; inputting the marked palm images into the MobileNet neural network model for training to obtain a trained MobileNet neural network model; and inputting a real-time image into the trained MobileNet neural network model, which outputs the palm recognition result for the image. The method identifies the palm in the image using the MobileNet neural network model, in which a MaxMin function serves as the activation function; this enables the model to learn more nonlinear features, yields better model performance, and effectively reduces gradient vanishing caused by dying neurons during training, thereby improving palm image recognition precision.
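The abstract's claim about gradient vanishing from dying neurons can be illustrated with a tiny, hypothetical comparison: for a negative pre-activation, ReLU blocks the gradient entirely, while a MaxMin-style activation (assumed here to keep the negative part as a separate channel) still passes a gradient through:

```python
import torch

# Dying-ReLU illustration (not from the patent): a negative pre-activation
# gets zero gradient through ReLU, so that neuron receives no learning signal.
x = torch.tensor([-1.5], requires_grad=True)
torch.relu(x).sum().backward()
print(x.grad)            # tensor([0.])

# A MaxMin-style activation that keeps the negative part as its own channel
# still propagates a gradient for the same input.
x2 = torch.tensor([-1.5], requires_grad=True)
torch.cat([x2.clamp(min=0), x2.clamp(max=0)]).sum().backward()
print(x2.grad)           # tensor([1.])
```

This only demonstrates the mechanism the abstract refers to; the patent's actual MaxMin definition may differ.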

Description

Technical field

[0001] The present invention relates to the technical field of image recognition, and in particular to a palm image recognition method, system, and device.

Background technique

[0002] At present, identity verification systems usually include the processes of biometric detection, registration, and identification. Among these, whether the biometric features are detected correctly is crucial to the performance of the verification system. Biometric detection is currently implemented either with traditional image processing or with deep learning. The former relies on feature templates designed by hand, and its learning process is computationally intensive and time-consuming; the latter designs a convolutional neural network (CNN) to learn the feature information of samples by itself, a method with strong applicability. In addition, a great deal of research has been carried out in recent years by academia and industry. The calculation parame...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/00; G06N3/04; G06N3/08
CPC: G06N3/084; G06V40/1347; G06V40/1365; G06N3/045; Y02T10/40
Inventor: 蔡小红, 陈书楷
Owner: 厦门熵基科技有限公司