
A Video Description Method Based on Object Attribute Relationship Graph

A technology for video description based on object attributes, in the field of image processing. It addresses problems such as the large amount of labeled data required, the lack of semantic perception information, and extracted features that remain at the primary visual level, while achieving good scalability.

Active Publication Date: 2022-07-12
CHONGQING UNIV OF TECH


Problems solved by technology

However, feature description methods based on deep learning require a large amount of labeled data, and the extracted features remain at the primary visual level, carrying no semantic perception information.




Embodiment Construction

[0035] The present invention will be described in further detail below.

[0036] Human understanding of video content begins with the perception of key objects in the video: by observing the apparent characteristics of key objects, analyzing the movement patterns of individual objects, and analyzing the relationships between multiple objects, people can easily identify what these objects are and in what context an activity is performed. Inspired by the way humans understand video, the present invention proposes a semantic-level video content description method based on an object attribute relationship graph. Following the mechanism of human perception of video scene content, this method represents a video as an Object Attribute Relationship Graph (OARG), in which the nodes of the graph represent the objects in the video and the edges represent the relationships between objects. The apparent features and motion trajectory features of each object in the video scene are ext...
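The OARG structure described above can be sketched as a small data structure. This is an illustrative sketch only: the class names, fields, and feature encodings below are assumptions for demonstration, not definitions taken from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class ObjectNode:
    """A key object in the video (one node of the OARG)."""
    obj_id: int
    appearance: list   # apparent (visual) features of the object (assumed encoding)
    trajectory: list   # motion-trajectory samples over the segment (assumed encoding)

@dataclass
class RelationEdge:
    """A relationship between two key objects (one edge of the OARG)."""
    src: int           # obj_id of the first node
    dst: int           # obj_id of the second node
    relation: str      # e.g. "left-of" (hypothetical label set)

@dataclass
class OARG:
    """Object Attribute Relationship Graph for one video segment."""
    nodes: dict = field(default_factory=dict)   # obj_id -> ObjectNode
    edges: list = field(default_factory=list)

    def add_object(self, node: ObjectNode) -> None:
        self.nodes[node.obj_id] = node

    def relate(self, src: int, dst: int, relation: str) -> None:
        self.edges.append(RelationEdge(src, dst, relation))

# Example: two objects detected in one segment, with one spatial relation
g = OARG()
g.add_object(ObjectNode(0, appearance=[0.1, 0.9], trajectory=[(10, 20), (12, 21)]))
g.add_object(ObjectNode(1, appearance=[0.4, 0.2], trajectory=[(50, 20), (48, 19)]))
g.relate(0, 1, "left-of")
```

In this sketch, node attributes carry the per-object features and edge attributes carry the inter-object relationship, mirroring the nodes-as-objects, edges-as-relationships design stated in the text.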



Abstract

The invention discloses a video description method based on an object attribute relationship graph. The method divides a given long video into segments with largely consistent content, then analyzes each segment to extract the key objects in the video scene, where each extracted key object is a rectangular box containing the object. Each extracted key object serves as a node of the object attribute relationship graph corresponding to that video segment; spatial and temporal feature analysis is performed on each object to describe its attributes, which become the node attributes. The relative positional relationship between key objects defines the connecting edges between nodes in the graph, and the relative direction, relative distance, and the change of the relative positional relationship over time between two key objects serve as edge attributes. The method achieves an effective and refined representation of complex video content over a given time span and has good scalability.
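The edge attributes named in the abstract (relative direction and relative distance between two key-object boxes) can be computed from box centers. A minimal sketch follows, assuming an `(x, y, w, h)` bounding-box format; the function names and box convention are illustrative assumptions, not taken from the patent.

```python
import math

def center(box):
    """Center point of an (x, y, w, h) bounding box (assumed format)."""
    x, y, w, h = box
    return (x + w / 2.0, y + h / 2.0)

def relative_direction(box_a, box_b):
    """Angle in degrees from box_a's center to box_b's center."""
    (ax, ay), (bx, by) = center(box_a), center(box_b)
    return math.degrees(math.atan2(by - ay, bx - ax))

def relative_distance(box_a, box_b):
    """Euclidean distance between the two box centers."""
    (ax, ay), (bx, by) = center(box_a), center(box_b)
    return math.hypot(bx - ax, by - ay)

# Two key objects in one frame (hypothetical boxes)
person = (10, 10, 20, 40)   # center (20.0, 30.0)
car = (60, 10, 40, 40)      # center (80.0, 30.0)
direction = relative_direction(person, car)  # 0.0: car is directly to the right
distance = relative_distance(person, car)    # 60.0
```

Tracking how these two values change from one frame to the next would give the temporal change of the relative positional relationship, the third edge attribute mentioned in the abstract.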

Description

technical field
[0001] The invention relates to the field of image processing, and in particular to a video description method based on an object attribute relationship graph.
Background technique
[0002] At present, cameras scattered in every corner of the city constitute a huge visual perception network, which provides an important data source for city security systems. In this context, demand for applications such as content-based video retrieval, object localization and tracking, and object behavior analysis has greatly increased. Constructing computer programs that automatically analyze video, extract the features and motion trajectories of the "interesting" objects in the scene content, describe the semantic content of the video, and build a content summary of the visual scene is an urgent need for the computer visual perception network to realize perception. It will also provide new representation methods and efficient technical means for...


Application Information

Patent Type & Authority Patents(China)
IPC(8): G06T7/246, G06T7/262
Inventor 冯欣张洁蒋友妮苟光磊龙建武张琼敏石美凤谭暑秋宋承云南海
Owner CHONGQING UNIV OF TECH