
A Spark-based semantic annotation method for massive videos

A semantic annotation and video technology, applied in video data retrieval, image data processing, instruments, etc., which solves the problems that single-machine computing resources cannot support large-scale computation and therefore cannot be applied to massive video annotation, and achieves the effect of improved scalability.

Active Publication Date: 2017-04-12
THE 28TH RES INST OF CHINA ELECTRONICS TECH GROUP CORP

AI Technical Summary

Problems solved by technology

[0007] These two methods achieve good image annotation results when the amount of data is small and the real-time requirements are not high. However, for massive video resources, the computing resources of a single machine clearly cannot support large-scale computation, so the algorithms cannot be applied to the annotation of massive videos.



Embodiment Construction

[0035] The Spark-based semantic annotation method for massive videos of the present invention comprises the following steps:

[0036] The first step is to establish a Hadoop/Spark massive-video big-data platform. The platform is composed of three parts: a management module, a structure module, and a data module. The three modules are independent of one another, which enables flexible storage of massive data, independent maintenance and upgrades, and flexible handling of system redundancy and backup. As shown in Figure 2, the management module provides a set of access interfaces to the operating system (client), mainly including creating, opening, closing, revoking, reading, and writing files and directories, as well as rights management. The operating system (client) obtains the various services of the data storage system through these access interfaces. The structure module creates corresponding data tables in the database for data files of different structures, and each table describes file attribute info...
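As a rough illustration of the kind of storage access the management module describes, the sketch below uses the standard Hadoop FileSystem API to create a per-video directory on HDFS and to write and read back a small metadata record. The directory layout, file names, and metadata fields are hypothetical; the patent does not specify them, and its actual management module may expose its own interfaces on top of HDFS.

```scala
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

object VideoStoreSketch {
  def main(args: Array[String]): Unit = {
    // Connect to the HDFS cluster configured in core-site.xml / hdfs-site.xml.
    val fs = FileSystem.get(new Configuration())

    // Hypothetical layout: one directory per ingested video.
    val videoDir = new Path("/video-store/video01")
    if (!fs.exists(videoDir)) fs.mkdirs(videoDir)

    // "Create" and "write": store a small metadata record alongside the video file.
    val out = fs.create(new Path(videoDir, "meta.txt"), true) // overwrite if present
    out.writeBytes("source=camera-3\nduration=3600s\n")
    out.close()

    // "Open" and "read": stream the record back.
    val in = fs.open(new Path(videoDir, "meta.txt"))
    val buf = new Array[Byte](128)
    val n = in.read(buf)
    in.close()
    println(new String(buf, 0, n, "UTF-8"))

    fs.close()
  }
}
```

In a full platform, the structure module described above would additionally register the file's attributes in a database table so that differently structured data files can be located and managed.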



Abstract

The present invention proposes a Spark-based semantic annotation method for massive videos. The method stores massive videos elastically and in a distributed manner in a Hadoop big-data cluster environment and uses the Spark computing model for video annotation. It mainly comprises: a video segmentation method based on fractal theory and its implementation on Spark; a Spark-based video feature extraction method and a visual-word formation method based on a meta-learning strategy; and a Spark-based method for generating video annotations. Compared with traditional stand-alone, parallel, or distributed computing, the present invention improves computing speed by more than a hundred times and has the advantages of complete annotation content, a low error rate, and the like.
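The pipeline named in the abstract (distributed feature extraction followed by visual-word formation and annotation generation) can be sketched in Spark as follows. This is a minimal illustration only: it assumes per-shot feature vectors have already been extracted, and it substitutes plain K-Means from Spark MLlib for the patent's meta-learning-based visual-word formation, which the abstract does not detail. All identifiers and values are hypothetical.

```scala
import org.apache.spark.ml.clustering.KMeans
import org.apache.spark.ml.linalg.Vectors
import org.apache.spark.sql.SparkSession

object VisualWordSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("VideoAnnotationSketch").getOrCreate()
    import spark.implicits._

    // Hypothetical input: one row per video shot, with a low-level feature
    // vector (e.g. a colour histogram) already extracted on the cluster.
    val shotFeatures = Seq(
      ("video01_shot001", Vectors.dense(0.12, 0.40, 0.33)),
      ("video01_shot002", Vectors.dense(0.10, 0.42, 0.31)),
      ("video02_shot001", Vectors.dense(0.80, 0.05, 0.12)),
      ("video02_shot002", Vectors.dense(0.78, 0.07, 0.10))
    ).toDF("shotId", "features")

    // Quantise the shot features into a small visual-word vocabulary.
    // (Stand-in for the patent's meta-learning-based visual-word formation.)
    val kmeans = new KMeans().setK(2).setSeed(42L).setFeaturesCol("features")
    val model = kmeans.fit(shotFeatures)

    // Each shot is assigned the index of its nearest cluster centre, i.e. its
    // visual word; annotation generation would then map words to labels.
    model.transform(shotFeatures)
      .select("shotId", "prediction")
      .show(truncate = false)

    spark.stop()
  }
}
```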

Description

Technical Field

[0001] The invention relates to a video processing method, in particular to a Spark-based semantic annotation method for massive videos.

Background Art

[0002] In recent years, with the popularity of multimedia applications and social networks, multimedia data of all kinds (text, images, videos, etc.) have grown exponentially. These large-scale data bring new challenges and opportunities to traditional multimedia research, especially to video-based applications and research. How to effectively organize and use video data to drive and satisfy users' various personalized needs for video is becoming a research hotspot in the fields of computer vision and multimedia.

[0003] There is a large gap between video as understood by humans and video as expressed by low-level visual features; that is, a "semantic gap" exists between video semantics and visual features. In order to achieve a query method described in natural language that is closer to t...


Application Information

Patent Type & Authority: Patent (China)
IPC (8): G06F17/30, G06T7/90, G06K9/62
CPC: G06F16/70, G06F16/739, G06V20/49, G06V10/754
Inventor: 崔铜, 葛军
Owner: THE 28TH RES INST OF CHINA ELECTRONICS TECH GROUP CORP