Video semantic scene segmentation method based on convolutional neural network

A convolutional neural network and semantic scene technology, applied in the field of video scene segmentation. It addresses problems such as poor scene classification, failure to account for the temporal ordering of video scenes, and information loss, with the effects of avoiding information loss, improving segmentation accuracy, and ensuring completeness.

Inactive Publication Date: 2018-01-16
HUAZHONG UNIV OF SCI & TECH


Problems solved by technology

[0006] Aiming at the above defects of, or needed improvements to, the prior art, the present invention provides a video semantic scene segmentation method based on a convolutional neural network, thereby addressing the shortcomings of existing video scene segmentation methods.




Embodiment Construction

[0032] In order to make the object, technical solution and advantages of the present invention clearer, the present invention will be further described in detail below in conjunction with the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the present invention, not to limit the present invention. In addition, the technical features involved in the various embodiments of the present invention described below can be combined with each other as long as they do not constitute a conflict with each other.

[0033] Aiming at the shortcomings of existing video scene segmentation methods, such as scene semantic classification errors caused by the feature extraction or fusion process, and clustering algorithms that do not take the temporal characteristics of scenes into account, the present invention provides a video semantic scene segmentation method based on convolutional neural networks. ...



Abstract

The invention discloses a video semantic scene segmentation method based on a convolutional neural network, which is divided into two main parts. In the first part, a convolutional neural network is built on the basis of shot segmentation, and semantic feature vectors of video key frames are obtained using the built network; the probability estimates that the network outputs for the different semantic classes serve as the semantic feature vector of a frame. In the second part, the Bhattacharyya distance between the semantic feature vectors of two shot key frames is calculated, exploiting the temporal continuity of preceding and following key frames, and the semantic similarity of the shot key frames is obtained from this distance. To handle the time-sequence nature of scene partitioning over continuous time, shot similarity is compared by combining the semantic features of the two shot key frames with the temporal distance between the shots, yielding the final scene segmentation result. The method has a degree of universality and achieves a good scene segmentation effect when the training set is sufficient.
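The abstract measures key-frame similarity via the Bhattacharyya distance between the per-class probability vectors output by the network. A minimal sketch of that measure is below; the exponential mapping from distance to a similarity score is an illustrative assumption, not a detail taken from the patent.

```python
import math

def bhattacharyya_distance(p, q):
    """Bhattacharyya distance between two discrete probability
    distributions, e.g. the softmax outputs used here as the
    semantic feature vectors of two shot key frames."""
    # Bhattacharyya coefficient: sum of sqrt(p_i * q_i)
    bc = sum(math.sqrt(pi * qi) for pi, qi in zip(p, q))
    return -math.log(bc)

def semantic_similarity(p, q):
    """Map the distance to a (0, 1] similarity score; higher means
    the two key frames are more likely to belong to the same scene.
    This mapping is a hypothetical choice for illustration."""
    return math.exp(-bhattacharyya_distance(p, q))
```

Identical distributions give a distance of 0 (coefficient 1) and thus maximal similarity; the more the class probabilities diverge, the larger the distance.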

Description

technical field

[0001] The invention belongs to the technical field of video scene segmentation in image processing and machine vision, and more specifically relates to a video semantic scene segmentation method based on a convolutional neural network.

Background technique

[0002] People usually do not understand video content at the level of individual shots, but are more accustomed to understanding it at the level of scenes. A shot is only a component unit in the video structure and cannot fully express semantic information, which easily causes loss of information. A scene, by contrast, is a collection of shots and carries much more semantic information, so it better matches how people understand video and makes research on video scene segmentation more practically meaningful.

[0003] Combining a series of shots with related content, in order to describe events or activities with certain semantic information in the video, is called shot clustering. ...
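The shot clustering described in [0003] can be pictured as greedy temporal grouping: consecutive shots whose key frames are semantically similar are merged into one scene. The sketch below is a simplified assumption for illustration; the patent's actual scheme combines semantic and temporal distances in ways not reproduced here, and `sim_fn` stands in for any key-frame similarity measure.

```python
def segment_scenes(key_frame_features, sim_fn, threshold=0.5):
    """Greedy temporal grouping of shots into scenes.

    key_frame_features: one semantic feature vector per shot,
    in temporal order. Adjacent shots are merged into the same
    scene when their key-frame similarity reaches `threshold`.
    Returns a list of scenes, each a list of shot indices.
    """
    scenes = [[0]]  # first shot opens the first scene
    for i in range(1, len(key_frame_features)):
        if sim_fn(key_frame_features[i - 1], key_frame_features[i]) >= threshold:
            scenes[-1].append(i)   # same scene continues
        else:
            scenes.append([i])     # scene boundary detected
    return scenes
```

Because only temporally adjacent shots are compared, the timing of shots is respected by construction, which is the property the patent faults earlier clustering approaches for ignoring.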

Claims


Application Information

IPC(8): G06K9/00; G06K9/62
Inventor: 韩守东, 黄飘, 朱梓榕, 陈阳
Owner HUAZHONG UNIV OF SCI & TECH