Video character relationship analysis method based on video spatio-temporal context

A technology combining spatio-temporal context and character relationship analysis, applied in the field of video character relationship analysis. It addresses the problems of redundant and missing character relationships in prior work, achieving the effects of improved accuracy and reduced query workload.

Pending Publication Date: 2021-11-12
NORTHWESTERN POLYTECHNICAL UNIV

AI Technical Summary

Problems solved by technology

However, in the prior art, research on building character relationship networks from videos suffers from redundant or missing character relationships.

Method used



Examples

Experimental program
Comparison scheme
Effect test

Specific Embodiment

[0146] 1. Video data preprocessing

[0147] a. Face CNN feature pre-training

[0148] In this embodiment, supervised pre-training is performed with a deep convolutional neural network and a sigmoid loss function to learn generalized discriminative features of face objects on an offline face dataset with labeled face identities. The selected network is ResNet-50, and the dataset is the VGG-Face2 face recognition dataset (see Figure 2). VGG-Face2, published in 2018 and publicly available for download, contains 3.31 million face images across 9131 face identities, averaging 362 images per identity. The CNN model trained on this face identity dataset is then used to adaptively learn more discriminative face CNN features on the video to be tracked.
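The pre-training step above classifies face identities under a sigmoid loss. As a minimal sketch of that loss alone (not the full ResNet-50 training loop; the toy logits, labels, and class count are illustrative stand-ins for the network head and the 9131 VGG-Face2 identities), a per-class sigmoid cross-entropy might look like:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_loss(logits, labels):
    """Per-class sigmoid cross-entropy, averaged over the batch.

    logits: (batch, n_classes) raw scores from the network head.
    labels: (batch, n_classes) one-hot face-identity targets.
    """
    p = sigmoid(logits)
    eps = 1e-12  # numerical floor to avoid log(0)
    return -np.mean(labels * np.log(p + eps)
                    + (1 - labels) * np.log(1 - p + eps))

# Toy batch: 2 samples, 3 identity classes.
logits = np.array([[4.0, -4.0, -4.0],
                   [-4.0, 4.0, -4.0]])
labels = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0]])
loss = sigmoid_loss(logits, labels)  # small: logits agree with labels
```

In a real implementation this loss would be applied to the ResNet-50 output head and minimized by stochastic gradient descent over the labeled face dataset.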

[0149] b. Collect sample datasets based on video context spatio-temporal constraints

[0150] Further mine the spatio-temporal constraint informa...



Abstract

The invention discloses a video character relationship analysis method based on video spatio-temporal context, which mines the character relationship network in a video through temporal and spatial information analysis so as to analyze video content in depth from a new perspective. The method comprises the following steps: first, preprocess the video data, including segmenting video shots and scenes, and extracting and clustering character features; second, based on the preprocessing result, compute a character's co-occurrence relationship within a given video shot using a context-based Gaussian weighting method; finally, fuse the contribution of spatial position to the video character relationship, compute a more specific and accurate co-occurrence relationship, and correct the quantitative result of the temporal co-occurrence method, thereby improving the accuracy of video character relationship analysis. The method can effectively improve the efficiency with which video users query targets of interest, reduce the workload of character queries, and improve the accuracy of character relationship mining.
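The abstract describes a context-based Gaussian weighting of character co-occurrence. The patent text here does not give the exact formula, so the following is an illustrative sketch under the assumption that the weight between two appearances decays as a Gaussian of their shot distance (the function name, `sigma` value, and shot-index representation are all assumptions):

```python
import math

def gaussian_cooccurrence(shots_a, shots_b, sigma=2.0):
    """Accumulate a Gaussian-weighted co-occurrence score between two
    characters from the shot indices where each appears.

    Appearing in the same shot contributes weight 1; appearances d shots
    apart contribute exp(-d^2 / (2 * sigma^2)), so temporally close
    appearances dominate the score.
    """
    score = 0.0
    for i in shots_a:
        for j in shots_b:
            d = i - j
            score += math.exp(-(d * d) / (2.0 * sigma * sigma))
    return score

# Characters seen in nearby shots score high; distant appearances
# contribute almost nothing.
near = gaussian_cooccurrence([1, 2], [2, 5])
far = gaussian_cooccurrence([1, 2], [40, 50])
```

Per the abstract, such a temporal score would then be corrected by a spatial-position term (e.g. how close the two characters appear within a frame), which this sketch omits.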

Description

technical field [0001] The invention belongs to the technical field of data mining, and in particular relates to a method for analyzing relationships between video characters. Background technique [0002] Among the many objects in videos such as films and television, characters are an important component; for story-driven videos such as movies and TV series in particular, characters are the main entities of the video, and the plot is advanced and unfolded through them. Video semantic analysis therefore centers on the video characters: by tracking the main characters or persons of interest in the video and mining the relationships between them, a character relationship network is obtained. [0003] In recent years, the construction of character social networks has been extensively studied. In 2001, based on databases of scientific papers in physics, biomedical research, and computer science, M. Newman et al. used the coope...

Claims


Application Information

Patent Timeline
No application data
IPC(8): G06K9/00; G06K9/62; G06N3/04; G06N3/08
CPC: G06N3/08; G06N3/048; G06N3/045; G06F18/23213
Inventors: 张顺, 梅少辉, 李昌跃, 王茹
Owner NORTHWESTERN POLYTECHNICAL UNIV