Target detection method based on self-supervised generative adversarial learning background modeling

A technology of target detection and background modeling, applied to neural learning methods, inference methods, biological neural network models, etc. It addresses the difficulties that brightness changes, dynamic backgrounds and camera movement pose for moving-target detection algorithms, and achieves the effects of reduced dependence, wide application scenarios, practical application value, and improved detection performance.

Pending Publication Date: 2021-11-26
TIANJIN UNIV OF SCI & TECH

AI Technical Summary

Problems solved by technology

However, current foreground object detection technology still faces great challenges.
Changes in brightness, dynamic backgrounds, camera movement and similar factors all pose great difficulties for target detection algorithms.
For example, current convolutional neural network based methods, which achieve the best results on videos captured by static cameras, have great difficulty when applied to videos captured by moving cameras.
[0004] The present invention proposes a method for background modeling based on self-supervised generative adversarial learning to overcome the difficulties of brightness changes, background dynamics and camera movement in foreground target detection.


Embodiment Construction

[0022] The embodiments of the present invention will be described in further detail below with reference to the accompanying drawings.

[0023] A target detection method based on self-supervised generative adversarial learning background modeling, including the following steps:

[0024] Step 1. Use a camera in the target environment to collect images, label the foreground targets to obtain the corresponding labels, and construct a dataset S.

[0025] Each sample consists of an image and a corresponding foreground target mask label. The foreground target mask label is a binary image of the same size as the original image. The pixels of the foreground target are marked as 1 and the background pixels are marked as 0.

[0026] Step 2. From the data set S, select a data subset S_b that contains only background images (no foreground targets).
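
As a rough illustration of Steps 1 and 2, the following sketch (Python with NumPy; file layout, array shapes and helper names are assumptions, not taken from the patent) builds (image, mask) samples for S and filters out the background-only subset S_b:

```python
# Illustrative sketch only (not from the patent text): build dataset S as
# (image, mask) pairs and select the background-only subset S_b.
# Paths, shapes and helper names are assumptions.
import numpy as np

def load_sample(image_path, mask_path):
    """Return one sample: a camera frame and its binary foreground mask
    (same height/width; foreground pixels = 1, background pixels = 0)."""
    image = np.load(image_path)            # e.g. H x W x 3 uint8 frame
    mask = np.load(mask_path)              # H x W binary label image
    assert mask.shape == image.shape[:2]
    return image, mask

def select_background_subset(samples):
    """S_b: keep only samples whose mask contains no foreground pixels."""
    return [(img, msk) for img, msk in samples if msk.sum() == 0]
```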

[0027] Step 3. Construct a generative adversarial network consisting of a generator network G and a discriminator network D, and use the data set S using s...
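
The step above is truncated in this extract. Purely as an illustration of how such a generator/discriminator pair could be trained on the background-only subset S_b with image-completion self-supervision, here is a hedged PyTorch sketch; the architectures, masking scheme, losses and hyperparameters are assumptions, not the patent's exact design:

```python
# Hedged sketch: a small encoder-decoder generator G and a discriminator D,
# trained adversarially on background-only images with an image-completion
# (inpainting) self-supervision signal. All details are illustrative.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """G: reconstructs the full background image from a partially masked input."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """D: scores whether an image looks like a real background frame."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 3, padding=1), nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )

    def forward(self, x):
        return self.net(x)

def train_step(G, D, opt_g, opt_d, background_batch, adv_loss, rec_loss):
    """One adversarial update on a batch of background-only images (N x 3 x H x W in [0, 1])."""
    masked = background_batch.clone()
    masked[:, :, 32:64, 32:64] = 0.0   # hide an assumed square patch for self-supervision
    fake = G(masked)

    # Discriminator: real background frames vs. G's reconstructions.
    real_logit, fake_logit = D(background_batch), D(fake.detach())
    d_loss = adv_loss(real_logit, torch.ones_like(real_logit)) + \
             adv_loss(fake_logit, torch.zeros_like(fake_logit))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: fool D while staying close to the true background.
    fake_logit = D(fake)
    g_loss = adv_loss(fake_logit, torch.ones_like(fake_logit)) + rec_loss(fake, background_batch)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```

In this sketch, adv_loss would typically be nn.BCEWithLogitsLoss() and rec_loss nn.L1Loss(); at inference time G would be run on a raw frame so that foreground regions are replaced by a plausible background reconstruction.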


Abstract

In most foreground target detection scenes, the background follows a relatively fixed pattern, yet changes in brightness, dynamic backgrounds, camera movement and the like bring many difficulties to target detection algorithms. On this basis, the invention provides a target detection method based on self-supervised generative adversarial learning background modeling, which can effectively overcome these difficulties and be applied in real environments. The method comprises the following steps: firstly, training data are formed by a self-supervised method of automatic image completion, and self-supervised adversarial learning is carried out with a generative adversarial neural network to construct a background reconstruction model; secondly, a convolutional neural network is trained to perform foreground target detection using the difference between the original image and the image reconstructed by the background model. The method can be applied to content shot by both static and moving cameras, and has wide application scenarios and practical application value.
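
For the second stage described in the abstract, a minimal sketch (assuming PyTorch; the network ForegroundNet and its layer sizes are illustrative, not the patent's implementation) of a CNN that predicts the foreground mask from the difference between the original frame and the background reconstructed by the generator:

```python
import torch
import torch.nn as nn

class ForegroundNet(nn.Module):
    """Illustrative CNN: maps |frame - reconstructed background| to a per-pixel
    foreground probability map (threshold it to obtain the binary mask)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1), nn.Sigmoid(),
        )

    def forward(self, frame, reconstructed_background):
        diff = torch.abs(frame - reconstructed_background)   # N x 3 x H x W difference image
        return self.net(diff)                                 # N x 1 x H x W foreground probability
```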

Description

technical field
[0001] The invention belongs to the technical field of digital image processing, and in particular relates to a method for realizing foreground target detection in images or videos.
Background technique
[0002] The detection of foreground targets in images is the basis for target recognition and tracking, and is widely used in many fields such as video surveillance, home monitoring, and field environmental monitoring. There are generally three types of methods for foreground target detection: the frame difference method, background subtraction, and direct target detection. The basic idea of the background subtraction method is to first establish a background model and then subtract the background image from the current image to obtain the foreground target.
[0003] After decades of development, especially with the development of deep learning since 2012, foreground object detection technology has made great progress. Compared with traditional unsupervised method...
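
The basic background-subtraction idea mentioned in [0002] can be written in a few lines (a toy NumPy example with an assumed fixed threshold, not the method of the invention):

```python
import numpy as np

def background_subtraction(frame, background, threshold=30):
    """Mark pixels whose absolute difference from the background model
    exceeds a threshold as foreground (1); everything else is background (0)."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return (diff.max(axis=-1) > threshold).astype(np.uint8)   # H x W binary mask
```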

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T7/00; G06N3/04; G06N3/08; G06N5/04
CPC: G06T7/0002; G06N3/08; G06N5/04; G06T2207/10004; G06T2207/20081; G06N3/045
Inventor: 任德华, 赵婷婷, 柳映辉, 陈亚瑞, 吴超, 张容
Owner: TIANJIN UNIV OF SCI & TECH