
Behavior recognition method using long-duration fast-slow network fusion based on pose joint points

A behavior recognition technology based on fast-slow network fusion for long videos. It addresses the problems of existing methods being inapplicable to long-video recognition, neglecting spatial structure features, and describing behavior with insufficient expressiveness; its effects are avoiding the loss of compensation information, improving recognition rate and robustness, and reducing the data volume.

Active Publication Date: 2019-07-26
NANJING UNIV OF POSTS & TELECOMM
View PDF | 6 Cites | 15 Cited by

AI Technical Summary

Problems solved by technology

Early deep-learning behavior recognition methods, such as the two-stream convolutional neural network architecture, laid the foundation for deep learning in this field, but they are not suitable for recognizing long videos. Moreover, when extracting appearance features, these methods often ignore the spatial structure of the behavior. Current algorithms mainly extract features from RGB images, which inevitably introduces redundant information, so the resulting description of the behavior is not fine-grained enough.
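The data-volume reduction gained by describing behavior with pose joint points instead of raw RGB frames can be illustrated with a back-of-the-envelope calculation. This is a minimal sketch: the 224x224 frame size and the 18-joint layout are common conventions in pose estimation, not figures taken from the patent.

```python
# Rough comparison of per-frame data volume: raw RGB frame vs. pose joint points.
# Assumptions (not from the patent): 224x224 8-bit RGB frames, 18 joints with
# (x, y) coordinates stored as 32-bit floats.

def rgb_frame_bytes(height: int = 224, width: int = 224, channels: int = 3) -> int:
    """Bytes for one uncompressed 8-bit RGB frame."""
    return height * width * channels

def joint_frame_bytes(num_joints: int = 18, coords: int = 2,
                      bytes_per_float: int = 4) -> int:
    """Bytes for one frame's pose joint coordinates."""
    return num_joints * coords * bytes_per_float

if __name__ == "__main__":
    rgb = rgb_frame_bytes()       # 150528 bytes
    joints = joint_frame_bytes()  # 144 bytes
    print(f"RGB frame: {rgb} B, joints: {joints} B, "
          f"reduction: {rgb / joints:.0f}x")
```

Under these assumptions the joint representation is roughly three orders of magnitude smaller per frame, which is why discarding the raw pixels also discards most of the redundant information.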




Embodiment Construction

[0015] The technical solution of the present invention is further described below in conjunction with the accompanying drawings. It should be understood that the embodiments provided are intended only to disclose the invention in detail and completely, and to fully convey its technical concept to those skilled in the art. The present invention can also be implemented in many other forms and is not limited to the embodiments described herein; the terms used in the exemplary embodiments shown in the drawings do not limit the invention.

[0016] Figure 1 shows the flow chart of the behavior recognition method based on long-duration fast-slow network fusion of pose joint points of the present invention, and Figure 2 is a schematic diagram of fast-slow network fusion. AlphaPose in the figure is the algorithm used to locate and extract the pose joint points of people in RGB images. The extracted results ...
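As a rough illustration of the joint-point preprocessing step described above, the sketch below normalizes one frame's joint coordinates to be translation- and scale-invariant. The input format (a list of (x, y) pixel coordinates per frame) and the normalization rule are assumptions for illustration; AlphaPose's actual output format and the patent's exact preprocessing are not reproduced here.

```python
from typing import List, Tuple

Joint = Tuple[float, float]

def normalize_joints(frame_joints: List[Joint]) -> List[Joint]:
    """Center joints on their mean position and scale by the bounding-box size,
    so the features do not depend on where the person stands in the frame or
    how large they appear. (Illustrative preprocessing, not the patent's exact
    normalization.)"""
    xs = [x for x, _ in frame_joints]
    ys = [y for _, y in frame_joints]
    cx, cy = sum(xs) / len(xs), sum(ys) / len(ys)
    scale = max(max(xs) - min(xs), max(ys) - min(ys)) or 1.0
    return [((x - cx) / scale, (y - cy) / scale) for x, y in frame_joints]
```

For example, a person detected twice as large in another frame yields identical normalized joints, which is the property a downstream graph convolutional network benefits from.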



Abstract

The invention provides a behavior recognition method using long-duration fast-slow network fusion based on pose joint points. The method comprises the following steps: automatically capturing patterns of joint-point structure features and trajectory features in space and in time sequence using a graph convolutional network; generating an overall spatio-temporal feature for each video clip through a feature-splicing network model and concatenating these clip features in sequence to form the overall spatio-temporal features of the video; fusing the RGB features and the pose joint-point features extracted from the input video at a high level of the convolutional network; and outputting the classification result of the video behavior through a support vector machine classifier with weighted fusion. Extracting pose joint-point features greatly reduces the data size and removes redundant information, while the spatio-temporal features extracted from the long-duration multi-frame image sequence provide feature compensation, improving the recognition rate and robustness for complex video behaviors.
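The final step, combining the pose-joint stream and the RGB stream by weighted fusion of per-class scores, can be sketched as follows. This is a minimal illustration: the 0.6/0.4 weights and the score values are assumptions, not parameters from the patent, and the SVM producing the scores is omitted.

```python
from typing import List

def weighted_fusion(pose_scores: List[float], rgb_scores: List[float],
                    pose_weight: float = 0.6) -> List[float]:
    """Fuse per-class scores from the two streams by a weighted sum.
    pose_weight is a hypothetical mixing coefficient in [0, 1]."""
    w = pose_weight
    return [w * p + (1 - w) * r for p, r in zip(pose_scores, rgb_scores)]

def predict(pose_scores: List[float], rgb_scores: List[float]) -> int:
    """Return the index of the behavior class with the highest fused score."""
    fused = weighted_fusion(pose_scores, rgb_scores)
    return max(range(len(fused)), key=fused.__getitem__)
```

Usage: with pose scores [0.1, 0.7, 0.2] and RGB scores [0.5, 0.3, 0.2], the fused scores are [0.26, 0.54, 0.20], so class 1 is predicted even though the RGB stream alone would have favored class 0; this is the sense in which the two streams compensate for each other.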

Description

Technical field

[0001] The invention belongs to the technical field of image recognition, and in particular relates to a behavior recognition method based on long-duration fast-slow network fusion of pose joint points.

Background technique

[0002] With the development and application of computer science and artificial intelligence, video analysis technology has risen rapidly and received widespread attention. A core task of video analysis is human behavior recognition. The performance of a recognition system largely depends on its ability to extract and utilize relevant information. However, extracting such information is difficult because of complexities such as scale changes, viewpoint changes, and camera motion. It is therefore crucial to design effective features that address these challenges while preserving the category information of behaviors. In the form of 2D or 3D coordinates, the dynamic skeletal modality can be naturally repres...


Application Information

Patent Timeline
No application data available
IPC(8): G06K9/00, G06K9/62
CPC: G06V40/20, G06V40/10, G06F18/241
Inventors: Sun Ning (孙宁), Guo Dashuang (郭大双), Li Xiaofei (李晓飞)
Owner: NANJING UNIV OF POSTS & TELECOMM