A Video Event Summary Graph Construction and Matching Method Based on Detail Description

A video event summary graph construction and matching technology, applied in the fields of computer vision and information retrieval, achieving the effect of improving the probability of converging to the global optimal solution.

Active Publication Date: 2016-09-28
BEIHANG UNIV
Cites: 4 · Cited by: 0

AI Technical Summary

Problems solved by technology

[0004] However, the above video event summary graph is only suitable for describing and matching simple events that involve only a few objects.

Method used




Embodiment Construction

[0037] The present invention will be described in detail below in conjunction with the accompanying drawings.

[0038] Refer to Figure 1, the schematic diagram of the event summary graph of the present invention. The event summary graph is an undirected attributed graph that describes event action information through the semantic attributes of its nodes and describes event layout information through the relationships between nodes, so as to represent the action occurrence state in the details of a given video event. The circular nodes in the figure represent summary actions, and the dotted edges connecting two summary actions represent role constraint relationships: if two actions are performed by the same character, there is a role constraint relationship between the two action nodes. In addition, each summary-graph action and its spatially adjacent actions (square nodes in the figure) form a spatial context relationship, which is represented by the solid lines in...
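
To make this structure concrete, the following is a minimal, hypothetical Python sketch of such an undirected attributed graph; it is not the patented implementation, and the attribute names (action_label, actor_id, bbox) and the rule used to create role-constraint edges are illustrative assumptions.

```python
# Minimal sketch (an assumption-based illustration, not the patented implementation)
# of an event summary graph: an undirected attributed graph whose nodes carry action
# semantics and whose edges encode role-constraint and spatial-context relationships.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class ActionNode:
    node_id: int
    action_label: str                  # semantic attribute, e.g. "pick_up_bag"
    actor_id: int                      # character performing the action
    bbox: Tuple[int, int, int, int]    # spatial location in the frame (x1, y1, x2, y2)

@dataclass
class EventSummaryGraph:
    nodes: Dict[int, ActionNode] = field(default_factory=dict)
    role_edges: List[Tuple[int, int]] = field(default_factory=list)     # dotted edges
    context_edges: List[Tuple[int, int]] = field(default_factory=list)  # solid edges

    def add_action(self, node: ActionNode) -> None:
        # Role constraint: connect this action to every existing action by the same actor.
        for other in self.nodes.values():
            if other.actor_id == node.actor_id:
                self.role_edges.append((other.node_id, node.node_id))
        self.nodes[node.node_id] = node

    def add_context_edge(self, a: int, b: int) -> None:
        # Spatial context: relationship between an action and a spatially adjacent action.
        self.context_edges.append((a, b))

# Example: two actions by the same actor get a role-constraint (dotted) edge.
g = EventSummaryGraph()
g.add_action(ActionNode(0, "pick_up_bag", actor_id=1, bbox=(10, 20, 60, 120)))
g.add_action(ActionNode(1, "walk_away", actor_id=1, bbox=(80, 20, 130, 120)))
g.add_context_edge(0, 1)   # spatially adjacent actions: solid edge
```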



Abstract

The invention provides a video event sketch (summary graph) construction and matching method based on detail description. The method comprises the following steps: each event sketch is defined as an undirected graph model that describes the action attribute set of event details as well as the role-constraint relationships and context between actions; for each type of event detail description feature of the event sketches, a corresponding matching measure is constructed; the overall matching measure between event sketches is expressed as a linear combination of the per-feature matching measures, and a relevance feedback method is adopted to learn the linear weight coefficients. One application of the method is video event detail retrieval, whose principle is as follows: the event details input by a user and the video event details in a library are both expressed as event sketches, sketch matching is then carried out under a data-driven Markov Chain Monte Carlo framework, and retrieval is thereby accomplished. The event sketch provided by the invention can effectively describe single-person or multi-person event details, and the invention also provides a reference method for video event detail matching and retrieval.
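
As a schematic illustration of the matching step (a sketch under stated assumptions, not the patented algorithm), the code below expresses the overall matching measure as a weighted linear combination of per-feature matching measures and shows one simple way relevance feedback could adjust the weights; the weight-update rule, feature names, and score range are hypothetical.

```python
# Schematic sketch (assumptions, not the patented algorithm) of the overall matching
# measure as a linear combination of per-feature matching measures, with weights that
# a relevance-feedback step re-estimates.
from typing import Any, Callable, Dict

# A per-feature matching measure maps (query_sketch, candidate_sketch) to a score.
FeatureMeasure = Callable[[Any, Any], float]

def combined_match(query: Any, candidate: Any,
                   measures: Dict[str, FeatureMeasure],
                   weights: Dict[str, float]) -> float:
    """Weighted sum of the per-feature matching scores."""
    return sum(weights[name] * fn(query, candidate) for name, fn in measures.items())

def relevance_feedback(weights: Dict[str, float],
                       feature_scores: Dict[str, float],
                       relevant: bool,
                       lr: float = 0.1) -> Dict[str, float]:
    """Illustrative weight update (not taken from the patent): boost features that
    scored high on results the user marked relevant, damp them otherwise, then
    renormalize so the weights stay a convex combination."""
    sign = 1.0 if relevant else -1.0
    updated = {k: max(1e-6, w + sign * lr * feature_scores[k]) for k, w in weights.items()}
    total = sum(updated.values())
    return {k: v / total for k, v in updated.items()}
```

In the retrieval application described above, the node correspondence between the query sketch and each library sketch would then be searched stochastically under the data-driven Markov Chain Monte Carlo framework, which is what improves the probability of converging to the globally optimal matching.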

Description

Technical field

[0001] The invention relates to the fields of computer vision and information retrieval, and in particular to a method for constructing and matching video event summary graphs based on detailed description.

Background technique

[0002] Content description and matching of video events is one of the basic problems in computer vision and information retrieval, and plays an important role in content representation, recognition, and retrieval applications. Low-level content description and matching methods extract and compare low-level global visual features from videos, such as color, texture, and optical flow, and are mainly used in applications such as object template representation and learning, object recognition, and retrieval. Mid-level content description and matching methods characterize video actions by extracting local features, and are applied to behavior template representation, action recognition, and retrieval...
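
For orientation only, the sketch below illustrates the kind of low-level global visual features mentioned above (a color histogram and dense optical flow) using OpenCV; it is a generic example rather than part of the invention, and the parameter values are assumptions.

```python
# Illustrative sketch (assumptions, not the patent's method) of low-level global
# features: a per-channel color histogram and mean dense optical flow between
# two consecutive frames, computed with OpenCV.
import cv2
import numpy as np

def color_histogram(frame_bgr: np.ndarray, bins: int = 32) -> np.ndarray:
    """Global color feature: concatenated per-channel histograms, L1-normalized."""
    hists = [cv2.calcHist([frame_bgr], [c], None, [bins], [0, 256]).ravel()
             for c in range(3)]
    h = np.concatenate(hists)
    return h / (h.sum() + 1e-8)

def mean_optical_flow(prev_bgr: np.ndarray, next_bgr: np.ndarray) -> np.ndarray:
    """Global motion feature: mean Farneback flow vector over the whole frame."""
    prev_gray = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    next_gray = cv2.cvtColor(next_bgr, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    return flow.reshape(-1, 2).mean(axis=0)
```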

Claims


Application Information

IPC(8): G06F17/30, G06K9/64, G06K9/66
Inventors: 陈小武 (Chen Xiaowu), 张宇 (Zhang Yu), 赵沁平 (Zhao Qinping), 蒋恺 (Jiang Kai)
Owner: BEIHANG UNIV