
A video automatic labeling method and automatic labeling system based on motion matching

A motion-matching and automatic-labeling technology in the computer field. It addresses the lack of methods for automatic semantic description of video content, and achieves reliable performance and low algorithm complexity.

Active Publication Date: 2017-02-15
WUXI YUNQUE TECH CO LTD

AI Technical Summary

Problems solved by technology

In real life it is often necessary to label videos as they are shot: for example, annotating a specific target in real time while it is being filmed for monitoring, or annotating commemorative videos of visitors shot in large amusement parks and scenic spots so that the visitors can later retrieve and extract them. However, most existing video annotation methods are based on training, matching, and labeling of pre-existing videos. At present, there is no method that uses the target's motion state to automatically produce a semantic description of video content.

Method used



Examples


Embodiment Construction

[0028] See figure 1 and figure 2. An automatic video labeling method based on motion matching collects the target motion video of a moving target while simultaneously collecting sensor data from a motion sensor module attached to the moving target. The target motion video is first image-preprocessed to obtain the foreground image block of each frame; optical flow estimation is then used to extract target video motion data from the foreground image block of each frame. Video motion features are extracted from the target video motion data, and sensor motion features are extracted from the sensor motion data. The video motion features and the sensor motion features are then subjected to feature matching analysis, and finally the target motion video is labeled according to the feature matching analysis result. In figure 1, the camera is a device used to collect moving video of moving objects, and A...
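The per-frame pipeline of [0028] (foreground extraction, motion-data extraction, feature extraction) can be sketched in simplified form. This is a minimal sketch, not the patent's implementation: plain frame differencing stands in for the preprocessing and optical-flow estimation steps, and all function names and thresholds are illustrative assumptions.

```python
import numpy as np

def video_motion_signal(frames, fg_thresh=15):
    """Per-frame motion magnitude from a grayscale frame sequence.

    Stand-in for the patent's preprocessing + optical-flow steps:
    pixels whose intensity changes by more than fg_thresh between
    consecutive frames are treated as foreground, and the mean
    absolute change over that foreground approximates the frame's
    motion magnitude.
    """
    signal = []
    for prev, cur in zip(frames, frames[1:]):
        diff = np.abs(cur.astype(np.int16) - prev.astype(np.int16))
        fg = diff > fg_thresh  # crude foreground mask
        signal.append(float(diff[fg].mean()) if fg.any() else 0.0)
    return np.asarray(signal)

def sensor_motion_signal(accel):
    """Motion magnitude from (N, 3) accelerometer samples."""
    return np.linalg.norm(np.asarray(accel, dtype=float), axis=1)
```

A dense optical-flow estimator (e.g. OpenCV's Farneback method) would replace the frame differencing in a fuller implementation; the output in both cases is a per-frame motion time series to be matched against the sensor signal.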



Abstract

The invention provides an automatic video labeling method based on motion matching that can automatically synchronize and label videos in real time as they are shot. The method comprises the steps of collecting a target motion video of a moving target, simultaneously collecting sensor data from a motion sensor module attached to the moving target, extracting video motion features from the target motion video and sensor motion features from the sensor motion data, carrying out feature matching analysis on the video motion features and the sensor motion features, and finally labeling the target motion video according to the feature matching analysis result. The corresponding automatic video labeling system comprises the motion sensor module, a video collection module, and a motion matching module. The motion sensor module collects the sensor motion data of the moving target; the motion matching module communicates with the motion sensor module, and with the video collection module, through wireless communication modules.
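The feature-matching-analysis step is not detailed in this excerpt. As a hedged sketch, assuming equal-length per-frame video and per-sample sensor motion-feature series, Pearson correlation can serve as a plausible matching score; the function names, threshold, and choice of correlation here are illustrative assumptions, not the patent's method.

```python
import numpy as np

def match_score(video_feat, sensor_feat):
    """Pearson correlation between two equal-length motion-feature series.

    Standing in for the patent's 'feature matching analysis':
    both signals are z-normalized and their mean product is the
    score, a value in roughly [-1, 1].
    """
    v = np.asarray(video_feat, dtype=float)
    s = np.asarray(sensor_feat, dtype=float)
    v = (v - v.mean()) / (v.std() + 1e-9)  # epsilon avoids /0 on flat signals
    s = (s - s.mean()) / (s.std() + 1e-9)
    return float(np.mean(v * s))

def label_if_matched(video_feat, sensor_feat, target_id, thresh=0.7):
    """Attach target_id to the video segment when the features agree."""
    return target_id if match_score(video_feat, sensor_feat) >= thresh else None
```

In practice the two series would first need temporal alignment and resampling to a common rate, since camera frame rates and sensor sampling rates differ.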

Description

Technical field

[0001] The invention relates to the field of computer technology, and in particular to a video automatic labeling method and automatic labeling system based on motion matching.

Background technique

[0002] Video annotation is the description of a video's semantic content and is a prerequisite for video retrieval. In real life it is often necessary to label videos as they are shot: for example, annotating a specific target in real time while it is being filmed for monitoring, or annotating commemorative videos of visitors shot in large amusement parks and scenic spots so that the visitors can later retrieve them. However, most existing video annotation methods are based on training, matching, and labeling of pre-existing videos; at present there is no method that uses the target's motion state to automatically produce a semantic description of video content.

Contents of the invention

[0003] In view of the above problems, the present in...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06T7/00; H04N7/18
Inventors: 段丁博, 马德新, 马建
Owner: WUXI YUNQUE TECH CO LTD