
Finger vein recognition method based on multi-semantic feature fusion network

A finger vein recognition method based on a multi-semantic feature fusion network, applied in the fields of biometric recognition, information security, and finger vein image recognition. It addresses problems such as network overfitting and the inability of shallow networks to extract higher-level semantic features, and achieves the effect of improving the recognition rate.

Pending Publication Date: 2021-03-30
HANGZHOU DIANZI UNIV


Problems solved by technology

This algorithm can reduce network overfitting to a certain extent, but it uses a shallow network and therefore cannot extract higher-level semantic features.




Embodiment Construction

[0025] The specific embodiments of the present invention will be further described below in conjunction with the accompanying drawings.

[0026] As shown in Figure 1, the finger vein recognition method based on the multi-semantic feature fusion network includes the following steps:

[0027] S1. Collect finger vein images, perform data augmentation on them, and construct the training and test sets. For the trained convolutional neural network model to classify well, the training sample set must reach a certain scale and the samples must be distributed evenly across categories; if there are too few samples, a sufficiently adaptable model cannot be obtained, and if the samples are unevenly distributed across categories, recognition of small-sample categories also suffers. The existing finger vein images are therefore augmented by rotation, translation, scaling,...
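Purely as an illustration of this augmentation step, the minimal Python sketch below generates rotation, translation, and scaling variants of a finger vein image. It assumes a PIL/torchvision pipeline with illustrative parameter values; neither the library nor the exact ranges is specified by the patent.

```python
# Illustrative sketch of the S1 augmentation step (not the patent's own code).
# Assumes grayscale finger vein images stored on disk and torchvision as the backend.
from PIL import Image
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomRotation(degrees=5),             # small random rotation
    transforms.RandomAffine(degrees=0,
                            translate=(0.05, 0.05),   # small random translation
                            scale=(0.95, 1.05)),      # mild random scaling
    transforms.ToTensor(),
])

def make_augmented_samples(image_path, n_copies=5):
    """Return several augmented variants of one finger vein image."""
    img = Image.open(image_path).convert("L")          # load as grayscale
    return [augment(img) for _ in range(n_copies)]
```

Each original image can be expanded into several augmented copies in this way, so that every finger class reaches a comparable number of training samples.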



Abstract

The invention discloses a finger vein recognition method based on a multi-semantic feature fusion network. First, finger vein images are collected, data augmentation is performed, and training and test sets are constructed. A feature extraction network is then built, comprising an input layer, an improved residual module, a feature fusion preprocessing module, a pooling layer, and a fully connected layer; a loss function is constructed, and the feature extraction network is trained on the training set. Finally, the images to be classified in the test set are input into the trained feature extraction network model to obtain their image features, and matching is computed on these features to obtain the identification result. The method effectively improves the finger vein image recognition rate and, compared with other methods, markedly reduces the rejection rate.
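To make the pipeline in the abstract concrete, here is a minimal PyTorch sketch of a feature extraction network followed by feature matching. The ImprovedResidualBlock and fusion-preprocessing layer are placeholders standing in for the patent's "improved residual module" and "feature fusion preprocessing module", whose actual structure is defined in the description and claims; layer sizes and the cosine-similarity matcher are assumptions for illustration only.

```python
# Illustrative sketch of the abstract's pipeline (not the patent's actual network).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ImprovedResidualBlock(nn.Module):
    """Placeholder for the patent's improved residual module."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, x):
        return F.relu(x + self.conv2(F.relu(self.conv1(x))))   # residual connection

class FeatureExtractionNet(nn.Module):
    """Input layer -> residual module -> fusion preprocessing -> pooling -> FC."""
    def __init__(self, num_classes, feat_dim=256):
        super().__init__()
        self.input_layer = nn.Conv2d(1, 64, 3, padding=1)       # grayscale input
        self.res_block = ImprovedResidualBlock(64)
        self.fusion_pre = nn.Conv2d(64, 64, 1)                  # placeholder fusion preprocessing
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(64, feat_dim)                        # feature embedding
        self.classifier = nn.Linear(feat_dim, num_classes)       # used only by the training loss

    def forward(self, x):
        x = F.relu(self.input_layer(x))
        x = self.res_block(x)
        x = F.relu(self.fusion_pre(x))
        x = self.pool(x).flatten(1)
        feat = self.fc(x)
        return feat, self.classifier(feat)

def match_score(feat_a, feat_b):
    """Cosine-similarity matching of two feature vectors."""
    return F.cosine_similarity(feat_a, feat_b, dim=-1)
```

At test time, the feature vector of a probe image would be compared against enrolled template features with match_score; the best-scoring template above a threshold yields the identification result.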

Description

technical field

[0001] The invention belongs to the technical field of biometric recognition and information security, and in particular to the field of finger vein image recognition.

Background technique

[0002] Finger vein recognition technology mainly comprises two approaches: traditional feature extraction and deep learning feature extraction.

[0003] Traditional feature extraction can be roughly divided into methods based on global features, methods based on local features, and methods based on vein patterns. Methods based on global features extract features from the entire image; examples include principal component analysis, two-directional two-dimensional principal component analysis, linear discriminant analysis, two-dimensional principal component analysis, and independent component analysis. These methods have low feature dimensionality and fast recognition speed. However, global features are strongly affected by factors such as p...
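For context on the global-feature methods cited in this background section (this is not the invention's method), the following sketch shows a PCA feature extractor with nearest-neighbor matching. It assumes scikit-learn and images flattened into row vectors; function names and the component count are illustrative.

```python
# Illustrative sketch of a global-feature baseline (PCA + nearest neighbor),
# as mentioned in the background; not the patent's method.
import numpy as np
from sklearn.decomposition import PCA

def pca_gallery(train_images, n_components=64):
    """Fit PCA on flattened training images and return (model, projected gallery)."""
    X = np.stack([img.ravel() for img in train_images]).astype(np.float64)
    pca = PCA(n_components=n_components)
    gallery = pca.fit_transform(X)
    return pca, gallery

def identify(pca, gallery, gallery_labels, probe_image):
    """Project a probe image and return the label of the closest gallery feature."""
    probe = pca.transform(probe_image.ravel().astype(np.float64)[None, :])
    dists = np.linalg.norm(gallery - probe, axis=1)
    return gallery_labels[int(np.argmin(dists))]
```

Such global-feature pipelines are fast and low-dimensional, which is exactly the trade-off the background paragraph describes before turning to deep feature extraction.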


Application Information

Patent Type & Authority: Application (China)
IPC (IPC-8): G06K 9/00; G06K 9/62; G06N 3/04; G06N 3/08
CPC: G06N 3/08; G06V 40/10; G06V 40/14; G06N 3/045; G06F 18/22; G06F 18/214; G06F 18/253
Inventor: 王智霖, 沈雷, 徐文贵
Owner: HANGZHOU DIANZI UNIV