Label feature near-duplicate video detection method based on convolutional neural network semantic classification

A convolutional neural network and classification-labeling technology, applied in the field of multimedia information processing, that addresses the prior art's lack of semantic features and low detection efficiency, achieving efficient video similarity matching, well-preserved features, and a small storage footprint.

Active Publication Date: 2020-09-29
XI AN JIAOTONG UNIV

AI Technical Summary

Problems solved by technology

[0006] The present invention solves the problems of missing semantic features and low detection efficiency in the prior art by means of deep learning.




Embodiment Construction

[0039] The implementation of the method of the present invention will be described in detail below in conjunction with the drawings and examples.

[0040] As shown in figure 1, the overall flowchart of the implementation process of the present invention, the present invention provides a near-duplicate video detection method based on label features from convolutional neural network semantic classification. The method first extracts dense semantic classification label features from the video; secondly, redundancy is removed from the features according to the repetition among the frame label features of the same video, yielding the semantic classification label features of the video; then similarity matching is performed on the feature vectors of the query video and the library videos; finally, the similarity between two videos is measured by calculating the Jaccard coefficient, so that near-duplicate videos can be detected. Among them,...
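The video-level pipeline described in [0040] can be sketched as follows. The per-frame label sets here are hand-written stand-ins for the top-k softmax classes a pretrained CNN classifier would emit per frame; the label names and the use of a plain set union for redundancy removal are illustrative assumptions, not the patent's exact procedure.

```python
def frame_labels_to_video_feature(frame_labels):
    """Remove redundancy: consecutive frames of one video repeat many
    labels, so a simple video-level feature is the union of the per-frame
    label sets (an assumed reading of the redundancy-removal step)."""
    video_feature = set()
    for labels in frame_labels:
        video_feature.update(labels)
    return video_feature

def jaccard_similarity(a, b):
    """Jaccard coefficient |A ∩ B| / |A ∪ B| between two label sets."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Hypothetical per-frame top-3 class labels for a query and a library video.
query_frames = [{"dog", "grass", "ball"}, {"dog", "grass", "sky"}]
library_frames = [{"dog", "grass", "ball"}, {"dog", "ball", "person"}]

q = frame_labels_to_video_feature(query_frames)    # {'dog','grass','ball','sky'}
v = frame_labels_to_video_feature(library_frames)  # {'dog','grass','ball','person'}

sim = jaccard_similarity(q, v)  # |{dog,grass,ball}| / 5 elements in the union
print(sim)  # 0.6
```

A near-duplicate decision would then compare `sim` against a threshold; the threshold value itself is not specified in the text shown here.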



Abstract

The invention discloses a label-feature near-duplicate video detection method based on convolutional neural network semantic classification, aiming to solve problems such as large feature storage space and low retrieval efficiency in the existing near-duplicate video retrieval field. The method comprises the following steps: firstly, extracting dense semantic classification label features from a video using a deep convolutional neural network model; removing redundancy according to the repeatability among the video frame label features to obtain the semantic classification label features of the video; carrying out similarity matching on the feature vectors of the query video and the library videos; and finally, measuring the similarity of two videos by calculating a Jaccard coefficient, thereby detecting near-duplicate videos. The two steps of video label-feature redundancy removal and feature matching each have two implementation modes, a video level and a frame level; that is, near-duplicate video detection based on semantic classification label features can be achieved through methods at two different levels. The invention can rapidly realize near-duplicate video detection and has a degree of robustness against video editing transformations and noise.
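The abstract mentions a frame-level mode alongside the video-level one. A minimal sketch of such a variant is shown below: frames are matched pairwise by Jaccard coefficient and the per-frame scores are aggregated. The best-match-then-average aggregation is an assumption for illustration; the text shown here does not specify how frame scores are combined.

```python
def jaccard(a, b):
    """Jaccard coefficient between two label sets."""
    return len(a & b) / len(a | b) if (a or b) else 1.0

def frame_level_similarity(query_frames, library_frames):
    """For each query frame's label set, take its best-matching library
    frame, then average those best scores over the query frames
    (assumed aggregation scheme)."""
    if not query_frames or not library_frames:
        return 0.0
    best = [max(jaccard(q, f) for f in library_frames) for q in query_frames]
    return sum(best) / len(best)

# Hypothetical per-frame label sets.
query = [{"dog", "grass"}, {"dog", "sky"}]
library = [{"dog", "grass"}, {"cat", "sofa"}]
print(frame_level_similarity(query, library))  # (1.0 + 1/3) / 2 ≈ 0.667
```

Frame-level matching is costlier than video-level matching (every query frame is compared against every library frame) but can localize which segments of two videos overlap.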

Description

technical field

[0001] The invention belongs to the field of multimedia information processing, and in particular relates to a near-duplicate video detection method based on label features from convolutional neural network semantic classification.

Background technique

[0002] With the vigorous development of Internet technology, video, as a carrier of information, plays an increasingly important role in information expression and transmission. With the rapid development of video capture equipment and video editing software, users can more easily obtain, edit, and share videos, and online video shows explosive growth. Take YouTube, the world's largest video website, as an example: currently, more than 500 hours of video are uploaded to the site every minute, and it has more than 1.8 billion monthly active users. However, within this massive video data there are large numbers of identical or similar videos. According to relevant research results, when searching based on 2...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/00, G06K9/62, G06F16/75, G06F16/783
CPC: G06F16/75, G06F16/7847, G06F16/785, G06F16/7857, G06V20/41, G06V20/46, G06F18/22, G06F18/214, G06F18/2415
Inventor: 王萍, 梁思颖
Owner: XI AN JIAOTONG UNIV