Dual model fusion-based video behavior segmentation method and apparatus

A video segmentation and behavior recognition technology, applied in character and pattern recognition, instruments, computer components, etc., which addresses problems such as the inability to recognize the semantics of video segments.

Active Publication Date: 2018-10-12
BEIJING YINGPU TECH CO LTD


Problems solved by technology

However, the methods above can only produce a rough segmentation and cannot identify the semantics of each segment in the video.


Examples


Embodiment Construction

[0045] The above and other objectives, advantages, and features of the present application will become more apparent to those skilled in the art from the following detailed description of specific embodiments of the application in conjunction with the accompanying drawings.

[0046] Embodiments of the present application provide a video segmentation method. Figure 1 is a schematic flowchart of an embodiment of the video segmentation method according to the present application. The method may include the following steps (illustrative sketches follow the list of steps):

[0047] S100 segment segmentation step: segment the video into segments based on correlation coefficients between adjacent video frames in the video;

[0048] S200 scene identification step: for the video frame in the segment, identify the scene of the video frame to obtain the scene feature vector;

[0049] S300 local behavior feature recognition step: for the video frame in the segment, identify the local behavior feature of the video frame to obtain a local behavior feature vector;...
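The excerpt does not spell out how the inter-frame correlation coefficient of step S100 is computed, nor what cut threshold is used. The following is only a minimal sketch under the assumptions that the correlation is a Pearson correlation over grayscale pixel intensities, that frames are read with OpenCV, and that 0.7 is an illustrative threshold:

```python
import cv2
import numpy as np

def segment_video(path, corr_threshold=0.7):
    """Step S100 (sketch): cut the video wherever the correlation between
    adjacent frames drops below corr_threshold.

    The Pearson measure and the 0.7 threshold are illustrative
    assumptions, not the patent's published parameters.
    """
    cap = cv2.VideoCapture(path)
    boundaries = [0]              # frame indices where new segments start
    prev, idx = None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32).ravel()
        if prev is not None:
            corr = np.corrcoef(prev, gray)[0, 1]   # Pearson correlation
            if corr < corr_threshold:              # low correlation => likely shot change
                boundaries.append(idx)
        prev, idx = gray, idx + 1
    cap.release()
    boundaries.append(idx)
    # (start_frame, end_frame) pairs, end exclusive
    return list(zip(boundaries[:-1], boundaries[1:]))
```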
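Steps S200 and S300 each map a frame to a feature vector with its own model, but the networks are not named in this excerpt. The sketch below therefore stands in two hypothetical torchvision ResNet-18 backbones, one for the scene eigenvector and one for the local behavior eigenvector; the real apparatus would presumably use models trained for scene recognition and for local behavior features respectively:

```python
import torch
import torchvision.models as models
import torchvision.transforms as T

# Two independent backbones stand in for the scene model (S200) and the
# local-behavior model (S300); pretrained ResNet-18s are placeholders only.
scene_model = models.resnet18(weights="IMAGENET1K_V1")
scene_model.fc = torch.nn.Identity()          # expose the 512-d feature vector
behavior_model = models.resnet18(weights="IMAGENET1K_V1")
behavior_model.fc = torch.nn.Identity()
scene_model.eval()
behavior_model.eval()

preprocess = T.Compose([
    T.ToPILImage(), T.Resize((224, 224)), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def frame_features(frame_bgr):
    """Return (scene_vector, behavior_vector) for one video frame (steps S200/S300)."""
    x = preprocess(frame_bgr[:, :, ::-1].copy()).unsqueeze(0)  # BGR -> RGB
    return scene_model(x).squeeze(0), behavior_model(x).squeeze(0)
```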


Abstract

The invention discloses a dual model fusion-based video behavior segmentation method and apparatus. The method comprises the steps of: segmenting a video into clips based on the correlation coefficient between adjacent video frames in the video; for the video frames in the clips, identifying the scenes of the video frames to obtain scene eigenvectors; for the video frames in the clips, identifying local behavior features of the video frames to obtain local behavior eigenvectors; based on the scene eigenvectors and the local behavior eigenvectors, identifying the behavior types of the video frames and the confidence degrees corresponding to the behavior types; based on the behavior types and confidence degrees of the video frames of each clip, determining the behavior type of the clip; and combining adjacent clips with the same behavior type to obtain a segmentation result of the video. According to the method, the two models can be fused at the same time, and overall behavior information is extracted by comprehensively utilizing two dimensions, the scenes and the local behaviors, so that the video is quickly segmented.
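As a hedged illustration of the later steps the abstract describes (fusing the two eigenvectors, classifying each frame with a confidence degree, deciding a behavior type per clip, and combining adjacent clips of the same type), the sketch below fuses by simple concatenation into a linear softmax classifier and votes per clip by summed confidence. The concatenation fusion, the voting rule, and the class count are illustrative assumptions, not details confirmed by the excerpt:

```python
import torch
from itertools import groupby

NUM_BEHAVIORS = 10   # hypothetical number of behavior classes
# Hypothetical fusion head over concatenated 512-d scene + 512-d behavior vectors.
classifier = torch.nn.Linear(512 + 512, NUM_BEHAVIORS)

@torch.no_grad()
def frame_behavior(scene_vec, behavior_vec):
    """Fuse both eigenvectors; return (behavior_type, confidence) for one frame."""
    fused = torch.cat([scene_vec, behavior_vec])      # fusion by concatenation (assumed)
    probs = torch.softmax(classifier(fused), dim=0)
    conf, label = probs.max(dim=0)
    return int(label), float(conf)

def clip_behavior(frame_results):
    """Decide the clip's behavior type by confidence-weighted voting over its frames."""
    scores = {}
    for label, conf in frame_results:
        scores[label] = scores.get(label, 0.0) + conf
    return max(scores, key=scores.get)

def merge_clips(labeled_clips):
    """Merge runs of adjacent clips that share the same behavior type.

    labeled_clips: [((start_frame, end_frame), behavior_type), ...] in order.
    """
    merged = []
    for label, run in groupby(labeled_clips, key=lambda c: c[1]):
        run = list(run)
        merged.append(((run[0][0][0], run[-1][0][1]), label))
    return merged
```

After merge_clips, no two adjacent output clips carry the same behavior type, which is the invariant the abstract's final combining step establishes.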

Description

Technical Field

[0001] The present application relates to the field of automatic image processing, and in particular to a video behavior segmentation method and apparatus based on dual model fusion.

Background Technique

[0002] The rapid development of video compression algorithms and applications has brought massive amounts of video data. Videos contain a wealth of information; however, because the volume of video data is huge and, unlike text, video does not directly express abstract concepts, extracting and structuring video information is relatively complicated. The prevailing approach to video information extraction is to segment the video first and then classify and label each resulting segment, which is one way of extracting and structuring video information. Segmenting video with traditional computer vision generally requires manually designed image features, which cannot flexibly adapt to changes in various scenes. Most of the currently available video segmentation is o...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/00; G06K9/62
CPC: G06V20/46; G06V20/49; G06F18/2411; G06F18/25
Inventor: 宋波 (Song Bo)
Owner: BEIJING YINGPU TECH CO LTD