Point-of-interest-position-information-based human body motion identification method in video

A human action recognition technology based on interest point position information, applied in the field of computer vision. It addresses problems such as complex computation and excessive memory requirements in current methods, while achieving high recognition accuracy.

Inactive Publication Date: 2016-04-20
SOUTH CHINA UNIV OF TECH
4 Cites, Cited by 9

AI Technical Summary

Problems solved by technology

This method effectively addresses the complex computation and excessive memory requirements of current human action recognition methods, while achieving high recognition accuracy.



Examples


Embodiment

[0072] As shown in Figure 1. First, for each video sequence in the video data set, the interest points of the human motion in the sequence are extracted. The position information of these interest points is then used to segment the sequence intelligently, dividing the video into several segments. Next, for each video segment, the interest point position distribution (HoP) descriptor is computed, and the HoP descriptor is used to represent the human action of that segment. The videos can then be trained and tested with classifiers such as support vector machines or nearest-neighbor classifiers. Each test video is likewise segmented intelligently to obtain the action category of each of its segments, and finally the action occurring most frequently is taken as the human action represented by the test video.
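The final decision step above is a majority vote over the per-segment predictions. A minimal sketch of that vote (function name and labels are illustrative, not from the patent):

```python
from collections import Counter

def classify_video(segment_labels):
    """Majority vote: the action predicted for the most segments is
    taken as the action of the whole test video."""
    label, _ = Counter(segment_labels).most_common(1)[0]
    return label

# 5 segments: three classified as "wave", two as "walk"
print(classify_video(["wave", "walk", "wave", "walk", "wave"]))  # prints "wave"
```

The per-segment labels themselves would come from the trained classifier (e.g. an SVM applied to each segment's HoP descriptor).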

[0073] The method specifically includes the following steps:

[0074] S1 For each video sequence in the video data set, extract points of inter...
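Step S1 is truncated in this excerpt and the exact detector is not specified; spatio-temporal interest point detectors are common in this setting. Purely as an illustration, a toy frame-differencing detector (an assumption, not the patent's method) could look like:

```python
import numpy as np

def extract_interest_points(frames, threshold=30, max_points=200):
    """Toy stand-in for step S1: mark pixels whose intensity changes
    strongly between consecutive frames as motion interest points.
    Returns (t, y, x) triples."""
    points = []
    for t in range(1, len(frames)):
        diff = np.abs(frames[t].astype(int) - frames[t - 1].astype(int))
        ys, xs = np.where(diff > threshold)
        # keep only the strongest responses per frame pair
        order = np.argsort(diff[ys, xs])[::-1][:max_points]
        points.extend((t, int(ys[i]), int(xs[i])) for i in order)
    return points

# toy clip: a single bright pixel moving one step to the right
frames = [np.zeros((8, 8), dtype=np.uint8) for _ in range(3)]
frames[1][2, 2] = 255
frames[2][2, 3] = 255
points = extract_interest_points(frames)  # contains (1, 2, 2) and (2, 2, 3)
```

The collected (t, y, x) positions are what the later segmentation and HoP steps consume.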



Abstract

The invention discloses a human body motion identification method in video based on interest point position information. The method comprises: S1, for each video sequence in a video data set, extracting the interest points of the human motion in the sequence; S2, intelligently segmenting the video sequence using these interest points, splitting the video data into a plurality of video segments; S3, for each video segment, computing the interest point position distribution (HoP) descriptor of the human motion, the HoP descriptor representing the human action of that segment; S4, carrying out human action training using the HoP descriptor representing each video segment; and S5, taking the human action with the highest occurrence frequency as the human action represented by the video. Because the HoP descriptors are calculated from interest point position information, differences among different actions are preserved effectively.
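The abstract does not spell out how the HoP descriptor is formed from the interest point positions. One plausible reading, offered here only as a hedged sketch (grid size and normalization are assumptions), is an L1-normalized spatial histogram of the (y, x) positions within a segment:

```python
import numpy as np

def hop_descriptor(points, frame_shape, bins=(4, 4)):
    """Sketch of a HoP-style descriptor: an L1-normalized spatial
    histogram of interest-point (y, x) positions in one video segment.
    The 4x4 grid and the normalization are illustrative assumptions."""
    h, w = frame_shape
    ys = np.array([p[1] for p in points], dtype=float)
    xs = np.array([p[2] for p in points], dtype=float)
    hist, _, _ = np.histogram2d(ys, xs, bins=bins, range=[[0, h], [0, w]])
    total = hist.sum()
    return (hist / total).ravel() if total else hist.ravel()

# three (t, y, x) interest points in an 8x8 frame
desc = hop_descriptor([(0, 1, 1), (0, 1, 2), (0, 6, 6)], frame_shape=(8, 8))
# desc is a 16-dimensional vector summing to 1
```

A fixed-length vector like this can then be fed directly to an SVM or nearest-neighbor classifier in steps S4 and S5.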

Description

Technical Field

[0001] The invention belongs to the field of computer vision, and in particular relates to a human body action recognition method in video based on the position information of interest points.

Background Technique

[0002] With the development of computer technology and multimedia technology, video has become the main carrier of information. In recent years, the increasing popularity of digital products and the rapid development of the Internet have made it easier to create and share videos. At the same time, the popularity of video surveillance, the popularity of the Microsoft Kinect motion-sensing game console, and the continuous development of human-computer interaction technology have also brought a wide variety of videos. Computer vision is playing an increasingly important role by combining video streams with computer processing, enabling computers to understand video information as humans do.

[0003] Human action recognition is an attractive and challengi...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/00, G06K9/32
CPC: G06V40/23, G06V40/20, G06V10/25
Inventors: 张见威, 朱林
Owner: SOUTH CHINA UNIV OF TECH