Multi-modal interaction behavior identification method based on RGB and three-dimensional skeleton

A multi-modal recognition method in the fields of artificial intelligence and computer video understanding that addresses the problems of erroneous classification probabilities, low discrimination between similar actions, and interference, and achieves the effect of improving recognition accuracy

Pending Publication Date: 2021-10-01
ZHONGBEI UNIV

AI Technical Summary

Problems solved by technology

However, the effectiveness of fusion depends on the relationship between the modalities: when the modalities are largely independent of each other, simply concatenating their features works well, but concatenating highly correlated features can adversely affect classification
In that case, decision fusion is more suitable, but its performance depends on the classification probabilities produced by each modality, which are easily disturbed by erroneous classification probabilities
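
As an illustration of the two fusion strategies contrasted above, here is a minimal PyTorch sketch (not part of the patent text; the module names, feature dimensions, and class count are assumptions):

```python
import torch
import torch.nn as nn

class FeatureFusion(nn.Module):
    """Feature-level fusion: concatenate per-modality features, then classify.
    Effective when the modalities are largely independent of each other."""
    def __init__(self, rgb_dim=512, skel_dim=256, num_classes=60):
        super().__init__()
        self.classifier = nn.Linear(rgb_dim + skel_dim, num_classes)

    def forward(self, rgb_feat, skel_feat):
        fused = torch.cat([rgb_feat, skel_feat], dim=-1)  # simple concatenation
        return self.classifier(fused)

def decision_fusion(rgb_logits, skel_logits, w=0.5):
    """Decision-level fusion: weighted average of per-modality class
    probabilities; a confidently wrong modality skews the result."""
    p_rgb = torch.softmax(rgb_logits, dim=-1)
    p_skel = torch.softmax(skel_logits, dim=-1)
    return w * p_rgb + (1.0 - w) * p_skel
```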
[0003] Interactive actions, such as interactions between a person and an object or between people, are the most common actions in daily life, but they are characterized by high complexity and high similarity
Different classes of interactive actions share many body movements and background environments, so the differences between them are small. For example, in the two actions of eating and drinking, the person's posture and the background are the same; the difference lies only in the object the person interacts with, so the discrimination between the actions is very small and recognition accuracy decreases
However, using object detection alone to provide interactive-object information cannot effectively improve recognition accuracy.




Embodiment Construction

[0015] In order to make the purpose, content and advantages of the present invention clearer, the specific embodiments of the present invention are described in further detail below.

[0016] The multi-modal interactive behavior recognition method based on RGB and three-dimensional skeleton proposed by the present invention mainly comprises the following steps: video preprocessing, construction of multi-modal spatial relationships, feature extraction with graph convolutional networks, and feature fusion. The video is first preprocessed to extract the information of the people and objects in it; multi-modal data are then used to construct the spatial relationship between people and objects from the global level to the local level; graph convolutional networks extract the corresponding deep features; finally, the modal features are fused at the feature layer and the decision layer to identify human interaction behavior, as follows:

[0017] (1) Video preprocessing: extraction of objec...
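
A minimal sketch of what the person/object extraction in this preprocessing step could look like, assuming a pretrained off-the-shelf detector from torchvision (the patent does not specify a detector; the confidence threshold and COCO label filtering are illustrative assumptions):

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# Pretrained COCO detector, used here only as a stand-in for the
# person/object extraction step described in the patent.
detector = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

@torch.no_grad()
def extract_people_and_objects(frame, score_thresh=0.7):
    """frame: float tensor of shape (3, H, W) with values in [0, 1].
    Returns bounding boxes of persons (COCO label 1) and of other objects."""
    out = detector([frame])[0]
    keep = out["scores"] > score_thresh          # drop low-confidence detections
    boxes, labels = out["boxes"][keep], out["labels"][keep]
    person_boxes = boxes[labels == 1]            # "person" class in COCO
    object_boxes = boxes[labels != 1]            # candidate interactive objects
    return person_boxes, object_boxes
```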



Abstract

The invention relates to a human interaction behavior recognition method based on RGB and skeleton multi-modality. The method comprises the following steps: first, the video is preprocessed to extract the human and object information in it; multi-modal data are then used to construct the spatial relationship between a human and an object from the global level to the local level; corresponding deep features are extracted with a graph convolutional network; finally, the modal features are fused at the feature layer and the decision layer to identify human interaction behaviors. By using RGB information and three-dimensional human skeleton data, a spatial relationship network model is constructed to mine the spatial relationship between a person and an object and extract the multi-modal interaction information between them; a fusion network based on this multi-modal interaction information then fuses the modal features effectively, so that the advantages of each modality are exploited to improve interaction behavior recognition accuracy.
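
The abstract does not give the network internals; as a hedged illustration of a single graph-convolution layer over skeleton joints, a minimal sketch (the joint count, channel sizes, and adjacency matrix below are assumptions, not the patent's architecture):

```python
import torch
import torch.nn as nn

class GraphConv(nn.Module):
    """One graph convolution over V skeleton joints: X' = ReLU(A_hat X W),
    where A_hat is a normalized joint adjacency matrix."""
    def __init__(self, in_ch, out_ch, adjacency):
        super().__init__()
        self.register_buffer("A", adjacency)      # (V, V), assumed pre-normalized
        self.lin = nn.Linear(in_ch, out_ch)

    def forward(self, x):                         # x: (N, V, C) joint features
        return torch.relu(self.lin(self.A @ x))   # aggregate neighbors, then project

# Example: 25 Kinect-style joints, with 3D coordinates as input features.
V = 25
A = torch.eye(V)  # placeholder; a real skeleton adjacency encodes bone links
layer = GraphConv(3, 64, A)
features = layer(torch.randn(8, V, 3))            # -> (8, 25, 64)
```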

Description

Technical field

[0001] The invention belongs to the technical fields of computer video understanding and artificial intelligence, and in particular relates to a multi-modal interactive behavior recognition method based on RGB and three-dimensional skeletons.

Background technique

[0002] Early research on human behavior recognition was mainly based on RGB video, which is easily affected by factors such as viewpoint changes, illumination changes, and complex backgrounds, so recognition accuracy was unsatisfactory. In recent years, with the development of low-cost depth cameras (such as Microsoft's Kinect), depth data has become easy to obtain, and reliable positions of human skeleton joints can be extracted from it in real time. Compared with RGB data, 3D data provides richer structural information about the 3D scene and is strongly robust to changes in illumination and scale. Skeleton data is a higher-level motion representation that includes the position of h...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/00; G06K9/62; G06N3/04; G06N3/08
CPC: G06N3/08; G06N3/045; G06F18/2415; G06F18/253
Inventor: 李传坤, 李剑, 郭锦铭, 韩星程, 王黎明, 韩焱
Owner: ZHONGBEI UNIV