
Cross-view gait recognition method based on a multi-task generative adversarial network

A gait recognition method based on multi-task learning, applied in the fields of machine learning and computer vision, which addresses the lack of interpretability in deep models

Active Publication Date: 2017-08-22
FUDAN UNIV

AI Technical Summary

Problems solved by technology

Although recognition accuracy has improved greatly over previous methods, deep models lack interpretability

Method used



Examples



[0081] Experimental example 1: Recognition performance of the multi-task generative adversarial network

[0082] This experiment compares the cross-view recognition accuracy of different models. As baseline methods, we chose autoencoders, canonical correlation analysis, linear discriminant analysis, convolutional neural networks, and local tensor discriminant models. Table 1 compares the method of the present invention with these methods on three datasets; the present invention shows a substantial improvement over all of them.
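The comparison above rests on a rank-1 cross-view recognition metric. As a minimal sketch (the helper name and the choice of Euclidean nearest-neighbour matching are illustrative assumptions, not taken from the patent), rank-1 accuracy over a probe/gallery split can be computed as:

```python
import numpy as np

def rank1_cross_view_accuracy(gallery, probe, gallery_ids, probe_ids):
    """Rank-1 accuracy: each probe feature is matched to its nearest
    gallery feature (Euclidean distance); a hit is counted when the
    identity labels of probe and matched gallery entry agree."""
    # pairwise distances, shape (n_probe, n_gallery)
    dists = np.linalg.norm(probe[:, None, :] - gallery[None, :, :], axis=2)
    nearest = np.argmin(dists, axis=1)
    return float(np.mean(gallery_ids[nearest] == probe_ids))
```

In a cross-view protocol, `gallery` would hold features extracted at one viewing angle and `probe` features of the same subjects at another.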


[0083] Experimental example 2: Effect of different loss functions on model performance

[0084] Table 2 shows how the model's performance on the CASIA-B dataset changes with different loss functions. Combining the multi-task adversarial loss with the pixel-wise loss improves recognition performance, whereas using either loss function alone degrades it.
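The two loss terms compared in Table 2 can be sketched as follows. The non-saturating form of the adversarial term, the L1 choice for the pixel-wise term, and the weight `lam` are illustrative assumptions, not values specified by the patent:

```python
import numpy as np

def pixelwise_l1(generated, target):
    # pixel-wise loss: mean absolute error between the generated
    # gait template and the ground-truth template at the target view
    return float(np.mean(np.abs(generated - target)))

def adversarial_loss(d_scores_fake):
    # non-saturating generator loss: -log D(G(x)), averaged over the batch;
    # eps guards against log(0)
    eps = 1e-8
    return float(-np.mean(np.log(d_scores_fake + eps)))

def combined_loss(generated, target, d_scores_fake, lam=10.0):
    # weighted sum of the two terms; lam trades off pixel fidelity
    # against realism of the generated template
    return adversarial_loss(d_scores_fake) + lam * pixelwise_l1(generated, target)
```

Table 2's finding corresponds to `combined_loss` outperforming training on either term in isolation.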


[0085] Experimental example 3: Effect of different walking states on model performance

[0086] Figure 3 shows the cross-view recognition accuracy on CASIA-B under different walking conditions. There are three such conditions: normal walking, walking with a bag, and walking in a coat. As the figure shows, accuracy is highest for normal walking, and sequences with a coat degrade model performance more noticeably than sequences with a bag.



Abstract

The invention belongs to the fields of computer vision and machine learning, and specifically relates to a cross-view gait recognition method based on a multi-task generative adversarial network. The objective of the invention is to solve the problem of reduced model generalization under large viewing-angle changes in gait recognition. The method comprises the following steps: first, preprocessing each frame of the original pedestrian video sequence and extracting gait template features; encoding the templates with a neural network to obtain a latent gait representation and performing view transformation in the latent space; constructing gait template features for other viewing angles with the multi-task generative adversarial network; and finally performing recognition with the latent gait representation. Compared with classification- or reconstruction-based methods, the method has strong interpretability and improves recognition performance.
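The pipeline in the abstract can be sketched end to end. Everything below is a toy illustration under assumed shapes: linear maps stand in for the encoder and generator networks, and per-view embedding vectors stand in for the latent-space view transformation; the patent does not specify these details.

```python
import numpy as np

rng = np.random.default_rng(0)
LATENT, TEMPLATE = 16, 64   # hypothetical latent and template sizes
N_VIEWS = 11                # CASIA-B, for example, has 11 viewing angles

W_enc = rng.standard_normal((LATENT, TEMPLATE)) * 0.1    # stand-in encoder
W_dec = rng.standard_normal((TEMPLATE, LATENT)) * 0.1    # stand-in generator
view_emb = rng.standard_normal((N_VIEWS, LATENT)) * 0.1  # one vector per view

def encode(template):
    # map a gait template (e.g. a flattened GEI) to the latent gait code
    return W_enc @ template

def transform_view(z, src, dst):
    # shift the latent code from the source view toward the target view
    return z - view_emb[src] + view_emb[dst]

def generate(z):
    # decode the (view-transformed) latent code back into a gait template
    return W_dec @ z

def identify(z_probe, z_gallery):
    # recognition on the latent codes via nearest-neighbour matching
    d = np.linalg.norm(z_gallery - z_probe, axis=1)
    return int(np.argmin(d))
```

In the patent's scheme the generator would be trained adversarially (one discriminator task per target view), whereas here `generate` is only a placeholder to show where that component sits in the flow.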

Description

Technical field [0001] The invention belongs to the technical fields of computer vision and machine learning, and in particular relates to a video-based cross-view gait recognition method. Background technique [0002] Video-based cross-view gait recognition is a research problem in computer vision and machine learning: given gait video frame sequences captured under different viewing angles, the task is to judge, using computer vision or machine learning algorithms, whether the sequences depict the same subject. There has been much prior work in this area, and the main methods fall into three categories: reconstruction-based methods, subspace-based methods, and deep-learning-based methods. Some references for these three categories: [0003] [1] W. Kusakunniran, Q. Wu, J. Zhang, and H. Li, "Support vector regression for multi-view gait recognition based on lo...
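A common concrete choice for the "gait template features" used in this literature is the Gait Energy Image (GEI): the per-pixel average of aligned binary silhouettes over a gait cycle. The patent text shown here does not name its template, so treating it as a GEI is an illustrative assumption:

```python
import numpy as np

def gait_energy_image(silhouettes):
    """GEI: per-pixel mean of aligned binary silhouettes (0/1 arrays)
    over one gait cycle; resulting values lie in [0, 1], with bright
    pixels marking body regions that are consistently occupied."""
    return np.mean(np.asarray(silhouettes, dtype=float), axis=0)
```

The averaging makes the template robust to per-frame segmentation noise while preserving the subject's body shape and motion envelope.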

Claims


Application Information

Patent Timeline
IPC (IPC8): G06K9/00, G06K9/46, G06K9/62, G06N3/04, G06N3/08
CPC: G06N3/08, G06V20/49, G06V10/44, G06N3/048, G06F18/285
Inventor: 何逸炜, 张军平
Owner: FUDAN UNIV