
Excavating method of topic actions of man-machine interaction for video analysis

A human-computer interaction and video analysis technology, applied in the field of image processing, which addresses problems such as the difficulty of finding accurate topics in video

Active Publication Date: 2015-06-10
TSINGHUA UNIV
Cites: 1 · Cited by: 13

AI Technical Summary

Problems solved by technology

For text information, a passage has a clear topic summary; for video, however, user subjectivity makes it difficult to find the most accurate topics, and the topic summaries obtained by different users depend entirely on their own subjective intentions.




Embodiment Construction

[0074] The human-computer interaction topic action mining method for video analysis proposed by the present invention comprises the following steps:

[0075] (1) Extract the feature matrix V of the video sequence to be analyzed; the specific process is as follows:

[0076] (1-1) Let the video sequence to be analyzed be I(x, y, t), where x and y are the pixel coordinates within the t-th frame. Gaussian convolution is performed on the video sequence I to obtain the smoothed video sequence L:

[0077] L(x, y, t; σ_l², τ_l²) = g(x, y, t; σ_l², τ_l²) ∗ I(x, y, t),

where g(x, y, t; σ_l², τ_l²) is a spatio-temporal Gaussian kernel with spatial variance σ_l² and temporal variance τ_l², and ∗ denotes convolution.
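The smoothing step above can be sketched as a separable spatio-temporal Gaussian filter. This is an illustrative sketch using SciPy rather than the patent's own implementation; the function name and default scale values are assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def spatiotemporal_smooth(video, sigma_spatial=2.0, sigma_temporal=1.5):
    """Smooth a video volume I(x, y, t) with a separable Gaussian kernel.

    `video` has shape (T, H, W). The spatial variance sigma_l^2 and the
    temporal variance tau_l^2 of the patent's kernel correspond here to
    sigma_spatial**2 and sigma_temporal**2 (illustrative defaults).
    """
    # Axis order is (t, y, x): temporal sigma first, then the two spatial sigmas.
    return gaussian_filter(
        video.astype(np.float64),
        sigma=(sigma_temporal, sigma_spatial, sigma_spatial),
    )

# Example: smooth a random 10-frame, 32x32 video.
I = np.random.default_rng(0).random((10, 32, 32))
L = spatiotemporal_smooth(I)
```

Because the Gaussian is separable over the spatial and temporal axes, a single call with a per-axis sigma tuple realizes the kernel g(x, y, t; σ_l², τ_l²).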



Abstract

The invention relates to a topic action mining method with human-computer interaction for video analysis, and belongs to the field of image processing technology. The method comprises the steps of extracting space-time interest points in a video, describing them with HOG (Histogram of Oriented Gradients) and OFH (Optical Flow Histogram) descriptors, and clustering the feature descriptors with the K-means method to form a bag-of-words model. A final feature matrix is obtained by vectorization; the number of topic actions and the topic actions themselves are then obtained by non-negative matrix factorization with a constraint term, and the topic actions are drawn in different colors along the whole time axis of the video sequence. Compared with general non-negative matrix factorization, the partition of topic actions is made more accurate by adding an edge weight matrix, the constraint term, and the like. Based on the non-negative matrix factorization, a user can mine topic action information in the video according to subjective intention through deleting, adding, and fusing operations, so both the subjectivity and the accuracy of topic actions in video analysis are ensured.
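The pipeline described in the abstract can be sketched end to end: K-means builds the visual vocabulary, histogramming yields the non-negative feature matrix V, and NMF factors V into topic actions and their activations over time. This sketch uses plain scikit-learn NMF; the patent's edge weight matrix and constraint term are not reproduced, and all sizes and the random descriptors are stand-ins, not values from the patent:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)

# Stand-in for HOG/OFH descriptors of space-time interest points:
# 500 descriptors of dimension 72 (illustrative sizes only).
descriptors = rng.random((500, 72))

# 1. Build the visual vocabulary with K-means (the patent's K-means step).
k = 20
codebook = KMeans(n_clusters=k, n_init=10, random_state=0).fit(descriptors)

# 2. Vectorize: a histogram of visual-word occurrences per temporal segment
#    gives the non-negative feature matrix V (words x segments).
n_segments = 30
segment_ids = rng.integers(0, n_segments, size=len(descriptors))
V = np.zeros((k, n_segments))
for word, seg in zip(codebook.labels_, segment_ids):
    V[word, seg] += 1

# 3. Factor V ≈ W H: columns of W are topic actions over the vocabulary,
#    rows of H are their activations over time. (Plain NMF here; the patent
#    adds an edge weight matrix and a constraint term on top of this.)
n_topics = 4
model = NMF(n_components=n_topics, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(V)
H = model.components_

# Dominant topic action per segment: this is what would be drawn in
# different colors along the video's time axis.
timeline = H.argmax(axis=0)
```

The interactive deleting/adding/fusing operations of the patent would then amount to editing columns of W (and rows of H) and re-running the factorization under the user's constraints.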

Description

Technical field

[0001] The invention relates to a human-computer interaction topic action mining method for video analysis, and belongs to the technical field of image processing.

Background technique

[0002] In recent years, with the increasing popularity of the Internet, ever more video clip information has become available. Compared with text, video carries more information and is harder for humans to distinguish and summarize subjectively. How users can mine the intrinsic information in a video according to their own subjective intentions is the main difficulty of video action mining.

[0003] In the existing technology, the literature [Interest point detection and scale selection in space-time, Ivan Laptev and Tony Lindeberg] uses feature descriptors of spatio-temporal interest points to detect the strongly varying parts of a video, which has been widely used in action recognition and, combined with the bag-of-words model, achieves good results. This method ...

Claims


Application Information

IPC(8): G06K 9/00; G06K 9/62
Inventors: 刘华平 (Liu Huaping), 滕辉 (Teng Hui), 孙富春 (Sun Fuchun)
Owner TSINGHUA UNIV