
A Fast Segmentation Method for Moving Objects in Unrestricted Scenes Based on Fully Convolutional Networks

A fully convolutional network and moving-target technology, applied in the field of fast moving-target segmentation, achieving accurate segmentation and improving analysis accuracy.

Active Publication Date: 2019-05-14
KUNMING UNIV OF SCI & TECH

AI Technical Summary

Problems solved by technology

[0005] The present invention provides a method for fast segmentation of moving objects in unrestricted scenes based on a fully convolutional network. It addresses the difficult problem of segmenting moving target objects in videos with fast-moving backgrounds, arbitrary object motion and appearance, non-rigid deformation, and articulated motion. The method provides a theoretical basis for efficient and accurate detection and segmentation of foreground target information in dynamic scenes, so that information about moving targets in videos can be obtained efficiently and accurately, improving the interpretation of video content and the acquisition of information.



Examples


Embodiment 1

[0031] Embodiment 1: As shown in Figures 1-4, a method for fast segmentation of moving objects in unrestricted scenes based on fully convolutional networks proceeds as follows. First, the video is divided into frames, and the ground-truth set S of sample images is made from the framing results. A fully convolutional neural network trained on the PASCAL VOC standard library predicts the target in each frame of the video, yielding a deep feature estimator for the foreground target in each image; from this, the inside-outside mapping information of the target in all frames is obtained, giving a preliminary prediction of foreground and background in the video frames. Then, the deep feature estimators of the foreground and background are refined by a Markov random field, realizing segmentation of foreground moving objects in unrestricted-scene videos; the performance of the method is verified against the ground-truth set S.
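The per-frame prediction step above can be illustrated with a minimal sketch: a per-pixel foreground probability map (standing in for the FCN's softmax output, which the patent does not specify in code form) is binarized into an inside(1)/outside(0) map per frame and stacked over all frames. The function names and the fixed 0.5 threshold are illustrative assumptions, not part of the patent.

```python
import numpy as np

def inside_outside_map(prob_map, threshold=0.5):
    """Binarize one frame's per-pixel foreground probability map
    (e.g. an FCN softmax output) into an inside(1)/outside(0) map."""
    return (prob_map >= threshold).astype(np.uint8)

def aggregate_maps(prob_maps, threshold=0.5):
    """Stack inside-outside maps over all frames, giving a coarse
    per-frame foreground/background prediction for the whole video."""
    return np.stack([inside_outside_map(p, threshold) for p in prob_maps])

# Example with two synthetic 4x4 "FCN" probability maps:
# one frame mostly foreground (0.8), one mostly background (0.2).
maps = aggregate_maps([np.full((4, 4), 0.8), np.full((4, 4), 0.2)])
```

In practice the probability maps would come from a network such as an FCN fine-tuned on PASCAL VOC, applied frame by frame.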

[0032] The concrete steps of the described method...



Abstract

The invention relates to a method of quickly segmenting a moving target in an unrestricted scene based on a fully convolutional network, and belongs to the technical field of video object segmentation. The method comprises the following steps. First, the video is divided into frames, and the framing results are used to make a ground-truth set S of sample images. A fully convolutional neural network trained on the PASCAL VOC standard library predicts the target in each frame of the video; a deep feature estimator for the image foreground target is acquired, from which the maximum intra-class likelihood mapping information of the target in all frames is obtained, realizing an initial prediction of the foreground and background in the video frames. Then, the deep feature estimators for the foreground and background are refined through a Markov random field, so that segmentation of the moving foreground target in unrestricted-scene video is realized. The information of the moving target can thus be effectively acquired, efficient and accurate segmentation of the moving target can be realized, and the analysis precision of video foreground-background information is improved.
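The Markov random field refinement step can be sketched with a simple Potts-model MRF over the binary foreground mask: the unary term comes from the per-pixel foreground probability, and a pairwise term penalizes label disagreement between 4-connected neighbours. The patent does not state which inference method it uses; iterated conditional modes (ICM) is used here purely as an illustrative approximate solver, and `beta` is an assumed smoothness weight.

```python
import numpy as np

def mrf_refine(prob, init_mask, beta=1.0, iters=5):
    """Refine a binary foreground mask with a Potts-model MRF,
    solved approximately by iterated conditional modes (ICM).
    prob: per-pixel foreground probability; beta: smoothness weight."""
    eps = 1e-6
    # Unary costs: negative log-likelihood of background (0) / foreground (1).
    unary = np.stack([-np.log(1.0 - prob + eps), -np.log(prob + eps)])
    mask = init_mask.copy()
    h, w = mask.shape
    for _ in range(iters):
        for y in range(h):
            for x in range(w):
                # Labels of the 4-connected neighbours.
                nbrs = []
                if y > 0:     nbrs.append(mask[y - 1, x])
                if y < h - 1: nbrs.append(mask[y + 1, x])
                if x > 0:     nbrs.append(mask[y, x - 1])
                if x < w - 1: nbrs.append(mask[y, x + 1])
                nbrs = np.array(nbrs)
                # Potts pairwise cost: beta per disagreeing neighbour.
                cost0 = unary[0, y, x] + beta * (nbrs != 0).sum()
                cost1 = unary[1, y, x] + beta * (nbrs != 1).sum()
                mask[y, x] = int(cost1 < cost0)
    return mask
```

With a moderate `beta`, isolated noisy foreground pixels in the initial FCN prediction are smoothed away while coherent foreground regions are preserved, which is the intended effect of the refinement stage.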

Description

Technical field

[0001] The invention relates to a fast segmentation method for a moving target in an unrestricted scene based on a fully convolutional network, and belongs to the technical field of video object segmentation.

Background technique

[0002] With the development of multimedia technology, video content provides us with rich and comprehensive information. However, raw video often contains a huge amount of information, most of which is meaningless for specific industry applications. How to extract meaningful information from video to serve people's life and work has therefore become an important issue closely tied to practical applications; for example, video object segmentation technology can be used to extract moving-target information from traffic surveillance video.

[0003] At present, methods and products for video object segmentation using image processing are already relatively mature in China, with existing products and patents. For exa...


Application Information

Patent Type & Authority: Patent (China)
IPC (8): G06T7/215
CPC: G06T2207/10016; G06T2207/20081; G06T2207/20084
Inventors: 张印辉, 何自芬, 张春全, 伍星, 张云生, 王森, 姜守帅
Owner: KUNMING UNIV OF SCI & TECH