Mass video semantic annotation method based on Spark

A semantic annotation and video technology, applied to video data retrieval, image data processing, instruments, etc. It solves the problems that computing resources cannot support large-scale operations and that existing methods cannot be applied to massive video annotation, and achieves the effect of improved scalability.

Active Publication Date: 2014-12-24
THE 28TH RES INST OF CHINA ELECTRONICS TECH GROUP CORP

Problems solved by technology

[0007] These two methods can achieve good image annotation results when the amount of data is small and the real-time requirements are not high. However, for massive vid...

Embodiment Construction

[0035] The steps of the Spark-based massive video semantic annotation method of the present invention are as follows:

[0036] The first step is to establish a Hadoop/Spark massive video big data platform. The platform is composed of three parts: a management module, a structure module, and a data module. These modules are independent of one another, which enables flexible storage of massive data, independent maintenance and upgrades, and flexible handling of system redundancy and backup. As shown in Figure 2, the management module provides a set of access interfaces to the operating system (client), mainly including creating, opening, closing, revoking, reading and writing files and directories, as well as rights management. The operating system (client) obtains the various services of the data storage system through these access interfaces. The structure module creates corresponding data tables in the database for data files of different structures, and each table describes the file attribute informa...
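
As a rough illustration of how a client might read massive video data off such a platform, the following PySpark sketch loads video files from HDFS and records basic per-file attributes. It is a minimal sketch under stated assumptions: the HDFS path and application name are hypothetical, and Spark's binaryFiles is used here only as a stand-in for the management module's read interface described above.

```python
# Minimal sketch: loading massive video data from a Hadoop/Spark platform.
# Assumptions (not from the patent): videos are stored as binary files on HDFS
# under a hypothetical path, and each file is handled as one record.
from pyspark import SparkConf, SparkContext

conf = SparkConf().setAppName("massive-video-annotation")
sc = SparkContext(conf=conf)

# binaryFiles returns (path, content) pairs for every file under the prefix;
# it stands in for the management module's read interface.
videos = sc.binaryFiles("hdfs:///data/videos/")  # hypothetical HDFS prefix

# Record basic per-file attributes (path, size in bytes), loosely analogous to
# the attribute information the structure module stores in its data tables.
attributes = videos.map(lambda kv: (kv[0], len(kv[1])))

for path, size in attributes.take(5):
    print(path, size)
```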

Abstract

The invention provides a mass video semantic annotation method based on Spark. The method is based on elastic distributed storage of massive video in a Hadoop big data cluster environment and adopts the Spark computation model to perform video annotation. The method mainly comprises the following contents: a video segmentation method based on fractal theory and its realization on Spark; a Spark-based video feature extraction method and a visual-word formation method based on a meta-learning strategy; and a Spark-based video annotation generation method. Compared with traditional single-machine, parallel, or distributed computation, the method can improve computation speed by more than a hundred times and has the advantages of complete annotation content information, a low error rate, and the like.
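
Read end to end, the abstract outlines a three-stage Spark pipeline: segment the videos, extract features and form visual words, then generate annotations. The sketch below only illustrates that data flow in PySpark; the shot segmentation and feature extraction functions are hypothetical placeholders (not the patented fractal-theory or extraction methods), plain k-means stands in for the meta-learning based visual-word formation, whose details the abstract does not give, and all HDFS paths are illustrative.

```python
# Illustrative three-stage pipeline following the abstract's outline; all
# per-video functions are hypothetical placeholders, not the patented methods.
from pyspark import SparkConf, SparkContext
from pyspark.mllib.clustering import KMeans

def segment_shots(content):
    # Placeholder for shot segmentation: pretend each video yields 8 key
    # frames, each described by an 8-dimensional dummy feature vector.
    data = list(content[:64]) + [0] * max(0, 64 - len(content))
    return [[float(v) for v in data[i:i + 8]] for i in range(0, 64, 8)]

sc = SparkContext(conf=SparkConf().setAppName("annotation-pipeline-sketch"))

videos = sc.binaryFiles("hdfs:///data/videos/")   # hypothetical HDFS prefix
frames = videos.flatMapValues(segment_shots)      # stage 1: segmentation
features = frames                                 # stage 2: feature extraction (placeholder: raw bytes)

# Stand-in for the visual-word step: cluster all frame features with k-means.
model = KMeans.train(features.values(), k=16, maxIterations=10)
centers = model.clusterCenters

def nearest_word(f):
    # Assign a frame to its closest visual word by squared Euclidean distance.
    return min(range(len(centers)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(f, centers[i])))

# Stage 3: annotation generation, here simply the visual-word index per frame.
annotations = features.mapValues(nearest_word)
annotations.saveAsTextFile("hdfs:///annotations")  # hypothetical output path
```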

Description

Technical Field
[0001] The invention relates to a video processing method, in particular to a massive video semantic annotation method based on Spark.
Background Technique
[0002] In recent years, with the popularity of multimedia applications and social networks, various kinds of multimedia data (text, images, videos, etc.) have shown explosive, exponential growth. These large-scale data have brought new challenges and opportunities to traditional multimedia research, especially to video applications and video-based research. How to effectively organize and utilize video data to drive and satisfy users' various personalized needs for video is becoming a research hotspot in the fields of computer vision and multimedia.
[0003] There is a large gap between video as understood by humans and video as expressed by underlying visual features, that is, there is a "semantic gap" between video semantics and visual features. In order to achieve a query method that is closer t...


Application Information

IPC (8): G06F17/30, G06T7/00, G06K9/62
CPC: G06F16/70, G06F16/739, G06V20/49, G06V10/754
Inventor: 崔铜, 葛军
Owner: THE 28TH RES INST OF CHINA ELECTRONICS TECH GROUP CORP