Video fine structuring method based on multi-feature fusion

A multi-feature-fusion, fine-structuring technology, applied in the field of instruments, character and pattern recognition, computer components, etc., which solves the problem that a single feature can hardly take both local and global video-frame information into account, and achieves the effects of improving computing efficiency, reducing computational complexity, and increasing reliability.

Active Publication Date: 2019-08-30
ZHEJIANG UNIV


Problems solved by technology

However, each single feature has its own emphasis and can rarely capture both the local and the global information of a video frame, so multiple types of features need to be fused to construct the feature vector.
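As an illustration of this fusion idea (not the patent's exact formulation), the following sketch concatenates per-block HSV color histograms, which carry local information, with a global edge-density measure; the block grid, histogram bins, and edge feature are assumptions made for the example.

```python
# Hypothetical sketch: fuse local (per-block) HSV color histograms with a
# global edge-density feature into one vector. The block grid and the chosen
# features are illustrative assumptions, not the patent's exact definition.
import cv2
import numpy as np

def fused_feature(frame_bgr, grid=(4, 4)):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    h, w = hsv.shape[:2]
    bh, bw = h // grid[0], w // grid[1]
    parts = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            block = hsv[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            # local information: coarse HSV histogram of this block
            hist = cv2.calcHist([block], [0, 1, 2], None, [8, 3, 3],
                                [0, 180, 0, 256, 0, 256]).flatten()
            parts.append(hist / (hist.sum() + 1e-8))
    # global information: overall edge density from a Canny edge map
    edges = cv2.Canny(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY), 100, 200)
    parts.append(np.array([edges.mean() / 255.0]))
    return np.concatenate(parts)
```

Any distance computed on such a concatenated vector then reflects both block-level color changes and frame-level structural changes.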



Examples


Embodiment

[0061] (1) Preliminary detection of video shot boundaries: HSV color-space information, which agrees well with the human visual system, is used to perform a preliminary shot-boundary detection on the video frames, yielding the set of frames at the start and end of each boundary, referred to as the first-and-last-frame set. A search method selects the first and last frames of a boundary, which reduces the computational complexity of boundary selection and shortens the time consumed. The color features of the first and last frames of a boundary are computed; if the difference between them is greater than a threshold, the search continues, otherwise it stops;

[0062] The HSV features are calculated as follows:

[0063] (1.1) The frame image is converted from RGB color information to HSV color information;

[0064] (1.2) The three HSV components are quantized non-uniformly into 8, 3 and 3 levels respectively, where:

[00...
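A minimal Python sketch of steps (1.1)-(1.2) and the boundary check of paragraph [0061]; since the exact quantization boundaries and threshold are truncated above, the bin edges, the L1-style difference measure, and the threshold value used here are illustrative assumptions.

```python
# Minimal sketch of steps (1.1)-(1.2) and the boundary test of [0061].
# The non-uniform bin edges, the difference measure and the threshold are
# assumptions; the patent's exact values are truncated above.
import cv2
import numpy as np

# Assumed non-uniform bin edges (OpenCV stores H in [0, 180), S and V in [0, 256)).
H_EDGES = np.array([10, 20, 37, 77, 95, 135, 147])   # 7 edges -> 8 hue levels
S_EDGES = np.array([51, 178])                         # 2 edges -> 3 saturation levels
V_EDGES = np.array([51, 178])                         # 2 edges -> 3 value levels

def hsv_color_feature(frame_bgr):
    """Quantize HSV to 8/3/3 levels and return a normalized 72-bin histogram."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    h = np.digitize(hsv[..., 0], H_EDGES)   # values 0..7
    s = np.digitize(hsv[..., 1], S_EDGES)   # values 0..2
    v = np.digitize(hsv[..., 2], V_EDGES)   # values 0..2
    code = 9 * h + 3 * s + v                # one combined index in [0, 72)
    hist = np.bincount(code.ravel(), minlength=72).astype(np.float64)
    return hist / hist.sum()

def boundary_search_continues(first_feat, last_feat, threshold=0.35):
    """Test from [0061]: keep searching while the color difference between
    the first and last frame of the candidate boundary exceeds the threshold."""
    return 0.5 * np.abs(first_feat - last_feat).sum() > threshold
```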


Abstract

The invention discloses a video fine-structuring method based on multi-feature fusion, belonging to the field of multimedia content processing. The method comprises the following steps: first, carrying out preliminary video shot-boundary detection with the HSV color feature; then establishing a fused feature vector on the basis of equal-size rectangular partitioning and introducing an adaptive threshold to re-inspect the candidate shot boundaries; clustering the shot set with a clustering algorithm to perform scene classification; extracting a key frame within each scene by comparing the difference between a frame and the average feature of the scene; and finally storing the structured information, such as the shot set, the scene set and the key-frame set, in a content server so that users can conveniently retrieve information from the database. The design reduces processing time, the selected features match the attention characteristics of the human eye and take both local and global information into account, and the accuracy is improved.
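For the key-frame step, the abstract compares each frame with the scene's average feature; a plausible reading, sketched below under that assumption, is to keep the frame whose fused feature vector lies closest to the scene mean (the feature extractor and the distance metric are carried over from the earlier sketches and are not specified by the source).

```python
# Sketch of the key-frame step from the abstract: within one scene, keep the
# frame whose fused feature is closest to the scene's average feature. The
# feature extractor and the Euclidean distance are assumptions for the example.
import numpy as np

def select_key_frame(scene_features):
    """scene_features: array of shape (num_frames, feature_dim) for one scene.
    Returns the index of the frame nearest to the scene's mean feature."""
    mean_feature = scene_features.mean(axis=0)
    distances = np.linalg.norm(scene_features - mean_feature, axis=1)
    return int(np.argmin(distances))
```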

Description

technical field [0001] The present application belongs to the fields of multimedia content processing, video segmentation and scene classification, and in particular relates to a video fine-structuring method based on multi-feature fusion. Background technique [0002] With the development of Internet multimedia services, video has become one of the main ways for people to obtain information. Although video can be subdivided into programs, scenes, shots and video frames from the perspective of semantic structure, the video actually transmitted is a continuous stream of digital information on which no visual structural analysis has been performed, so the huge volume of video data is difficult for users to retrieve. [0003] In existing video-structuring schemes, feature extraction and shot-edge detection are mostly performed on video frames so as to clarify the video hierarchy and facilitate structured analysis and storage. Commonly used features are the co...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/00
CPC: G06V20/46; G06V20/49
Inventor: 李晨晗, 李荣鹏, 赵志峰, 张宏纲
Owner: ZHEJIANG UNIV