Unsupervised cross-domain action recognition method based on channel fusion and classifier confrontation

A technology relating to action recognition and classifiers, applied in the field of computer vision and pattern recognition, addressing problems such as weak generalization ability.

Pending Publication Date: 2020-10-20
TIANJIN UNIVERSITY OF TECHNOLOGY

AI Technical Summary

Problems solved by technology

[0004] The purpose of the present invention is to solve the problem of action recognition when the training set of the target data set is unlabeled and differs from existing labeled data sets. The data sets used by previous action recognition methods all assume that the training set and the test set are independently and identically distributed, which results in weak generalization ability.


Examples


Embodiment 1

[0039] Figure 1 shows the operation flowchart of the unsupervised cross-domain action recognition method (CAFCCN) based on channel fusion and classifier confrontation of the present invention; the operation steps of the method are as follows:

[0040] Step 10 Selection of Action Recognition Model

[0041] First of all, for action recognition tasks, it is necessary to select an appropriate model.

[0042] In image recognition tasks, 2D-convolution-based methods are usually used, but such methods cannot be directly applied to action recognition. In action recognition, 3D-convolution-based methods model temporal and spatial information simultaneously, but 3D convolution involves a large number of parameters, which makes it impossible to build a deep network and difficult to train. Therefore, the present invention selects a two-stream-based method for action recognition, obtains input s...
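The paragraph above is truncated in this extract, so the exact two-stream configuration is not visible. As an illustrative aid only, the following minimal sketch shows a two-stream (spatial RGB plus temporal optical-flow) feature extractor with a simple channel-level fusion by concatenation; the ResNet-18 backbones, the 10 stacked flow channels, and the concatenation-based fusion are assumptions for the sketch, not the patent's specification.

# Minimal two-stream sketch (illustrative only; backbone and fusion choices are assumptions).
import torch
import torch.nn as nn
from torchvision.models import resnet18

class TwoStreamChannelFusion(nn.Module):
    """Spatial (RGB) + temporal (stacked optical flow) streams with channel-level fusion."""

    def __init__(self, num_classes: int, flow_channels: int = 10):
        super().__init__()
        # Spatial stream: standard 3-channel RGB input, 512-d output features.
        self.spatial = resnet18(weights=None)
        self.spatial.fc = nn.Identity()
        # Temporal stream: first convolution adapted to stacked optical-flow channels.
        self.temporal = resnet18(weights=None)
        self.temporal.conv1 = nn.Conv2d(flow_channels, 64, kernel_size=7,
                                        stride=2, padding=3, bias=False)
        self.temporal.fc = nn.Identity()
        # Channel fusion: concatenate the two 512-d feature vectors, then classify.
        self.classifier = nn.Linear(512 * 2, num_classes)

    def forward(self, rgb: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
        f_spatial = self.spatial(rgb)      # (B, 512)
        f_temporal = self.temporal(flow)   # (B, 512)
        fused = torch.cat([f_spatial, f_temporal], dim=1)
        return self.classifier(fused)

if __name__ == "__main__":
    model = TwoStreamChannelFusion(num_classes=12)
    rgb = torch.randn(2, 3, 224, 224)     # one sampled RGB frame per clip
    flow = torch.randn(2, 10, 224, 224)   # 5 flow fields (x/y) stacked as 10 channels
    print(model(rgb, flow).shape)         # torch.Size([2, 12])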


Abstract

The invention discloses an unsupervised cross-domain action recognition method (CAFCCN) based on channel fusion and classifier confrontation. It achieves efficient action recognition on a target-domain test set using a labeled source-domain data set and an unlabeled target-domain training set. The method comprises the following specific steps: (1) selecting an action recognition model; (2) optimizing the two-stream deep network structure; (3) constructing an objective function based on the two-stream network; (4) building an unsupervised cross-domain action recognition model based on the two-stream network; and (5) constructing a data set. The method has the advantage that unlabeled training sets of other data sets can be recognized efficiently on the basis of a known data set, effectively solving the problem of an unlabeled target-domain training set. By applying the confrontation (adversarial) method, confusion of categories and domains is achieved at the same time, yielding domain-level and class-level invariant features; the method converges quickly and achieves efficient recognition of actions.
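The abstract states that the confrontation method confuses categories and domains at the same time to obtain domain-level and class-level invariant features, but the concrete losses are not shown on this page. The following sketch illustrates one common form of classifier confrontation, in the style of maximum classifier discrepancy: two label classifiers are pushed to disagree on unlabeled target data while the feature extractor is pushed to remove that disagreement. The function names, the L1 discrepancy measure, and the three-step update order are assumptions for illustration, not the patent's objective function.

# Illustrative classifier-confrontation step (assumed MCD-style; not the patent's exact objective).
import torch.nn.functional as F

def discrepancy(logits1, logits2):
    # Mean L1 distance between the two classifiers' softmax predictions.
    return (F.softmax(logits1, dim=1) - F.softmax(logits2, dim=1)).abs().mean()

def confrontation_step(feat, clf1, clf2, opt_f, opt_c, x_src, y_src, x_tgt):
    # (A) Supervised step on labeled source data: update feature extractor and both classifiers.
    opt_f.zero_grad(); opt_c.zero_grad()
    z_src = feat(x_src)
    loss_cls = F.cross_entropy(clf1(z_src), y_src) + F.cross_entropy(clf2(z_src), y_src)
    loss_cls.backward()
    opt_f.step(); opt_c.step()

    # (B) Classifiers confront: stay accurate on source, maximize disagreement on target.
    opt_c.zero_grad()
    z_src = feat(x_src).detach()
    z_tgt = feat(x_tgt).detach()
    loss_c = (F.cross_entropy(clf1(z_src), y_src) + F.cross_entropy(clf2(z_src), y_src)
              - discrepancy(clf1(z_tgt), clf2(z_tgt)))
    loss_c.backward()
    opt_c.step()

    # (C) Feature extractor responds: minimize the classifiers' disagreement on target.
    opt_f.zero_grad()
    z_tgt = feat(x_tgt)
    loss_f = discrepancy(clf1(z_tgt), clf2(z_tgt))
    loss_f.backward()
    opt_f.step()
    return loss_cls.item(), loss_c.item(), loss_f.item()

In this arrangement, steps (B) and (C) play the adversarial game the abstract calls classifier confrontation: the classifiers try to expose target samples on which they disagree, and the feature extractor tries to remove that disagreement, which drives the features toward class-level invariance across domains.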

Description

technical field
[0001] The invention belongs to the technical field of computer vision and pattern recognition, and relates to an unsupervised cross-domain action recognition method (CAFCCN) based on channel fusion and classifier confrontation, in which the training set of the target-domain data carries no labels. Aided by source-domain data, the effectiveness of the model is verified when the target-domain training set is unlabeled.
Background technique
[0002] In recent years, with the rapid development of deep learning technology, many scholars have proposed action recognition methods based on deep learning that can extract robust video representations. Classical action recognition methods include 3D-convolution-based methods and two-stream-based methods. Among the 3D-convolution-based methods, C3D has achieved great success. In the C3D method, the input is a continuous 16-frame clip, spatial and temporal features are obtained simultaneously through 3D convolution, and g...
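The background refers to C3D, whose input is a continuous 16-frame clip processed with 3D convolutions that capture spatial and temporal features jointly. For readers unfamiliar with that layout, the minimal sketch below shows how a 3D convolution consumes a (channels, frames, height, width) clip; the channel counts, pooling sizes, and input resolution are placeholders for illustration, not the published C3D architecture.

# Minimal 3D-convolution sketch for a 16-frame clip (layer sizes are placeholders, not exact C3D).
import torch
import torch.nn as nn

class Tiny3DConvNet(nn.Module):
    """Jointly models temporal and spatial structure with 3x3x3 convolutions."""

    def __init__(self, num_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool3d(kernel_size=(1, 2, 2)),   # pool only space early, keep all 16 frames
            nn.Conv3d(32, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool3d(kernel_size=2),           # now pool time and space together
            nn.AdaptiveAvgPool3d(1),               # global spatio-temporal average
        )
        self.fc = nn.Linear(64, num_classes)

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip: (batch, channels=3, frames=16, height, width)
        return self.fc(self.features(clip).flatten(1))

if __name__ == "__main__":
    net = Tiny3DConvNet(num_classes=12)
    clip = torch.randn(2, 3, 16, 112, 112)
    print(net(clip).shape)  # torch.Size([2, 12])

As the description notes, such 3D networks model time and space jointly but carry many more parameters than 2D backbones, which is the motivation given in Embodiment 1 for choosing the two-stream route instead.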


Application Information

IPC(8): G06K9/00; G06K9/62; G06N3/04; G06N3/08
CPC: G06N3/08; G06V40/20; G06N3/048; G06N3/045; G06F18/24; G06F18/253; G06F18/214
Inventor: 高赞, 赵一博, 张桦, 薛彦兵, 袁立明, 徐光平
Owner: TIANJIN UNIVERSITY OF TECHNOLOGY