Method for segmenting and indexing scenes by combining captions and video image information

A video image and scene segmentation technology applied in the field of video indexing and search. It addresses the problems of time-consuming and labor-intensive manual work, unsatisfactory recall and precision rates, and subjective labeling results, achieving high segmentation accuracy while avoiding manual labeling.

Publication date: 2010-06-02 (status: inactive)
Applicant: INST OF ACOUSTICS CHINESE ACAD OF SCI
Cites: 0; Cited by: 81

AI Technical Summary

Problems solved by technology

[0004] The purpose of the present invention is to overcome the unsatisfactory recall and precision rates of video scene extraction in the prior art, as well as the need to manually label the extracted video scene segments.



Examples


Embodiment Construction

[0045] When extracting and indexing scene segments from movie-type videos, the present invention uses both the movie's video images and its subtitles to achieve higher-precision extraction of video scene segments, and automatically takes the keywords contained in the matched subtitles as the index of each extracted scene segment, thereby avoiding manual labeling. Subtitles are generally the dialogue of the characters in a movie, and each subtitle has three attributes: its appearance time in the movie, its disappearance time, and its text. For current high-definition DVD movies, subtitles are generally released together with the video files in the form of plug-in files, which are easy to obtain; embedded subtitles (subtitle text superimposed on the video image) can be extracted through video OCR technology. Since every subtitle carries its appearance and disappearance time in the video, the present invention extracts the collection of video frames falling within this duration as the minimum unit of scene clustering.
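As an illustration of how the per-caption minimum cluster unit described above might be obtained in practice, the following Python sketch parses a plug-in SRT subtitle file into (appearance time, disappearance time, text) triples and maps each caption's duration to a range of video frames. The SRT format, the default frame rate of 25 fps, and all function names are assumptions made for illustration; the patent does not prescribe a particular subtitle file format or implementation.

```python
import re
from datetime import timedelta

def parse_srt_time(ts: str) -> timedelta:
    """Convert an SRT timestamp such as '00:01:02,345' to a timedelta."""
    h, m, rest = ts.split(":")
    s, ms = rest.split(",")
    return timedelta(hours=int(h), minutes=int(m), seconds=int(s), milliseconds=int(ms))

def parse_srt(path: str):
    """Yield (start, end, text) triples, one per caption, from a plug-in SRT file."""
    with open(path, encoding="utf-8") as f:
        blocks = re.split(r"\n\s*\n", f.read().strip())
    for block in blocks:
        lines = block.splitlines()
        if len(lines) < 3:
            continue
        start_str, end_str = (t.strip() for t in lines[1].split("-->"))
        text = " ".join(lines[2:]).strip()
        yield parse_srt_time(start_str), parse_srt_time(end_str), text

def caption_frame_ranges(path: str, fps: float = 25.0):
    """Map each caption's duration to a frame range, i.e. one minimum cluster unit.

    The frame rate is an assumed parameter; in practice it would be read
    from the video file itself."""
    for start, end, text in parse_srt(path):
        first_frame = int(start.total_seconds() * fps)
        last_frame = int(end.total_seconds() * fps)
        yield first_frame, last_frame, text
```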



Abstract

The invention relates to a method for segmenting and indexing scenes by combining captions and video image information. Within the duration of each caption, the corresponding collection of video frames is taken as the minimum unit of scene clustering. The method comprises the steps of: after obtaining a minimum cluster unit, extracting at least three discontinuous video frames from it to form the key-frame collection of that caption; comparing the similarity of the key frames of adjacent minimum units with a bidirectional SIFT key-point matching method, and establishing an initial attribution relationship between captions and scenes in combination with a caption-related transition diagram; for adjacent minimum cluster units judged to be dissimilar, further deciding whether they can be merged from the relationship between those units and their corresponding captions; and extracting the video scenes according to the determined caption-scene attribution relationships. For the extracted video scene segments, forward and inverted indexes generated from the caption text contained in each segment serve as the basis for indexing and searching the video segments.
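The "bidirectional SIFT key-point matching" referred to in the abstract can be approximated with OpenCV by keeping only keypoint matches that are mutual nearest neighbours in both directions. The sketch below is a minimal illustration under that reading; the match-count threshold and the rule for declaring two key-frame collections similar are illustrative assumptions, not values taken from the patent.

```python
import cv2

def bidirectional_sift_matches(img_a, img_b):
    """Return SIFT keypoint matches that are mutual nearest neighbours.

    img_a and img_b are expected to be 8-bit grayscale frames,
    e.g. loaded with cv2.imread(path, cv2.IMREAD_GRAYSCALE)."""
    sift = cv2.SIFT_create()
    _, desc_a = sift.detectAndCompute(img_a, None)
    _, desc_b = sift.detectAndCompute(img_b, None)
    if desc_a is None or desc_b is None:
        return []
    # crossCheck=True keeps a match only if a->b and b->a agree,
    # which is the bidirectional (mutual nearest neighbour) criterion.
    matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
    return matcher.match(desc_a, desc_b)

def key_frames_similar(frames_a, frames_b, min_matches=20):
    """Hypothetical decision rule: two key-frame collections are considered to
    belong to the same scene if any pair of frames shares at least
    `min_matches` mutual SIFT matches. The threshold is an assumption."""
    return any(
        len(bidirectional_sift_matches(fa, fb)) >= min_matches
        for fa in frames_a
        for fb in frames_b
    )
```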

Description

technical field
[0001] The present invention relates to the technical field of video indexing and searching, and in particular to a method that combines subtitles with video image information for scene segmentation and indexing.
Background technique
[0002] Advances in mass-storage device manufacturing, increases in network data transmission rates, and the continuous improvement of high-efficiency video compression technology have enabled the widespread dissemination and use of digital video, enriching people's entertainment and cultural life. Finding interesting video clips in massive video databases has therefore become a new problem. A video can be organized into a tree hierarchy of scenes, shots, and frames. A frame is a single image and is the most basic physical unit of a video. A shot is a sequence of frames captured continuously by the same camera and constitutes the physical boundary of a sequence of video images. A scene is composed of a group of semantically related, temporally adjacent shots and expresses a relatively complete semantic unit.
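The technical field described above is video indexing and search, and the abstract states that forward and inverted (reverse) indexes built from each scene segment's caption text are the foundation for indexing the extracted video segments. A minimal sketch of how such indexes could be built and queried follows; the whitespace tokenisation and the segment identifiers are hypothetical choices made for illustration.

```python
from collections import defaultdict

def build_indexes(scene_captions):
    """Build a forward index (segment -> keywords) and an inverted index
    (keyword -> segments) from the caption text of each extracted scene.

    `scene_captions` maps a scene segment id to the list of caption strings
    it contains; whitespace tokenisation is an illustrative assumption."""
    forward, inverted = {}, defaultdict(set)
    for segment_id, captions in scene_captions.items():
        keywords = {word.lower() for text in captions for word in text.split()}
        forward[segment_id] = keywords
        for word in keywords:
            inverted[word].add(segment_id)
    return forward, dict(inverted)

# Usage example with hypothetical scene segments and dialogue:
forward_idx, inverted_idx = build_indexes({
    "scene_001": ["Where are we going?", "To the station."],
    "scene_002": ["The train leaves at noon."],
})
print(inverted_idx.get("train", set()))   # -> {'scene_002'}
```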


Application Information

IPC(8): G06F17/30, H04N5/262, G11B27/10
Inventors: 王劲林, 李松斌, 王玲芳
Owner: INST OF ACOUSTICS CHINESE ACAD OF SCI