
Extraction method of semantic information of video images

A semantic-information extraction method applied in the fields of video description and video annotation, addressing problems such as information forgetting, model performance degradation, and context vectors that cannot contain all of the global information.

Active Publication Date: 2017-11-24
TSINGHUA UNIV
Cites: 4 · Cited by: 34


Problems solved by technology

[0005] Embodiments of the present invention provide a method and device for extracting semantic information of video images. They address a problem of the prior art: when the input video is long and the number of extracted frames is large, the context vector generated by encoding cannot contain all of the global information; in particular, information from frames early in the video may be forgotten, degrading model performance.




Embodiment Construction

[0112] The following will clearly and completely describe the technical solutions in the embodiments of the present invention with reference to the accompanying drawings in the embodiments of the present invention. Obviously, the described embodiments are only some, not all, embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by persons of ordinary skill in the art without making creative efforts belong to the protection scope of the present invention.

[0113] As shown in Figure 1, the embodiment of the present invention provides a method for extracting semantic information of video images, including:

[0114] Step 101: Obtain a video training set and a video verification set from a preset video annotation dataset.

[0115] Step 102: Extract video frame images from the videos in the video training set at a preset frame interval, and generate a plurality of video frame sequences.

[0116] Step 103: Process...
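Step 102's sampling can be sketched in a few lines. This is an illustrative sketch only: the function names, the chunking rule, and the handling of a short trailing sequence are assumptions, not taken from the patent.

```python
def sample_frame_indices(total_frames, interval):
    """Indices of the frames kept when sampling at a preset frame interval (step 102)."""
    return list(range(0, total_frames, interval))


def to_sequences(indices, seq_len):
    """Group the sampled indices into fixed-length video frame sequences.
    A shorter trailing group is kept as its own sequence (an assumption)."""
    return [indices[i:i + seq_len] for i in range(0, len(indices), seq_len)]
```

For example, a 10-frame clip sampled every 3 frames yields indices `[0, 3, 6, 9]`, which `to_sequences(..., 2)` groups into the two sequences `[[0, 3], [6, 9]]`.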



Abstract

The invention provides a method for extracting semantic information from video images, in the technical field of video description and annotation. First, frame sequences are extracted from a video at a preset frame interval, and a feature vector of each frame image is extracted with a convolutional neural network. The feature vectors are used as input to an LSTM encoder network; the output of each time step of the LSTM encoder and the output of the previous time step of the LSTM decoder are used as input to an external memory (EMM), which updates the contents of its stored matrix. The EMM outputs two read vectors, which serve as input vectors for decoding and encoding at the subsequent time step, respectively. The two LSTM networks dynamically control the reading and writing of the EMM, so that the feature vector of each frame image of the video is stored during the encoding phase. During the decoding phase, feedback from the predicted words adjusts the output of the external memory at subsequent time steps, so that when the video annotation is generated, the context feature vectors are adjusted according to the word sequence generated so far.
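The core mechanism described above, an external memory matrix that the LSTMs write to during encoding and read from during decoding, can be sketched minimally. Everything here is an assumption for illustration: the class name, the soft content-based addressing, and the additive write rule stand in for the patent's exact EMM formulation, which is not given on this page.

```python
import math


class ExternalMemory:
    """Illustrative sketch of an external memory matrix (the patent's EMM):
    soft content-addressed read and additive write. The addressing and
    update rules are assumptions, not the patent's exact equations."""

    def __init__(self, slots, width):
        # Stored matrix: `slots` rows, each a vector of length `width`.
        self.M = [[0.0] * width for _ in range(slots)]

    def _address(self, key):
        # Softmax over dot-product similarity between the key and each slot.
        scores = [sum(k * m for k, m in zip(key, row)) for row in self.M]
        mx = max(scores)
        exps = [math.exp(s - mx) for s in scores]
        z = sum(exps)
        return [e / z for e in exps]

    def write(self, key, value):
        # Add the value vector into every slot, weighted by the address
        # (how a frame's feature vector could be stored during encoding).
        w = self._address(key)
        for wi, row in zip(w, self.M):
            for j, v in enumerate(value):
                row[j] += wi * v

    def read(self, key):
        # Address-weighted sum of slots: the "read vector" fed back to an LSTM.
        w = self._address(key)
        width = len(self.M[0])
        return [sum(wi * row[j] for wi, row in zip(w, self.M))
                for j in range(width)]
```

In the patent's scheme two read vectors are produced per step (one routed to the encoder, one to the decoder); the sketch shows a single read for clarity, and the decoder-side read key would incorporate the previously generated word.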

Description

Technical Field

[0001] The present invention relates to the technical field of video description and video labeling, and in particular to a method and device for extracting semantic information of video images.

Background Technique

[0002] Currently, with the development of the Internet, digital devices, and multimedia technology, video has attracted more attention from multimedia users because it is more vivid than text and pictures. The rapid growth of short-video applications such as WeChat and Kuaishou, and of various online live-broadcast platforms, has made video play an increasingly important role in people's lives. To help people better understand video content, it is extremely important to describe and label video images so as to obtain their semantic information. Video description means using natural language to describe the characteristics of video content according to the specific content of a video. Using an applicat...

Claims


Application Information

IPC(8): G06F17/30; G06K9/62
CPC: G06F16/783; G06F16/7867; G06F18/214
Inventors: 尹首一, 杨建勋, 欧阳鹏, 刘雷波, 魏少军
Owner TSINGHUA UNIV