Video object removal tampering time-space domain positioning method based on deep learning

A technology for video object and time-domain localization, applied to neural learning methods, digital video signal modification, instruments, and similar fields.

Active Publication Date: 2021-02-05
HANGZHOU DIANZI UNIV

AI Technical Summary

Problems solved by technology

However, the research on passive forensics of digital video is still in its infancy, and there is still a lot of room for exploration and improvement.

Method used

Examples

Experimental program
Comparison scheme
Effect test

Embodiment Construction

[0052] To facilitate a better understanding of the technical solutions of the present invention, the embodiments of the present invention are described in detail below in conjunction with the accompanying drawings. It should be clear that the described embodiments are only some of the embodiments of the present invention, and that all other embodiments obtained by persons of ordinary skill in the art without creative effort also fall within the protection scope of the present invention.

[0053] An embodiment of the present invention provides a deep-learning-based video object removal tampering time-space domain positioning method. As shown in Figure 1, the method includes the following steps:

[0054] In step 101, the video sequences in the data set are randomly divided for constructing a training set, a validation set, and a test set, where the numbers of videos used for the training, validation, and test sets are in a ratio of 6:2:2.
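
Step 101 only specifies the 6:2:2 ratio and the random division. As a minimal sketch of such a split, the snippet below partitions a folder of video files into training, validation, and test subsets; the directory layout, file extension, and random seed are illustrative assumptions, not part of the patent.

```python
import random
from pathlib import Path


def split_dataset(video_dir: str, seed: int = 42) -> dict:
    """Randomly split video sequences into train/val/test at a 6:2:2 ratio."""
    # The *.mp4 extension and directory layout are assumptions for illustration.
    videos = sorted(Path(video_dir).glob("*.mp4"))
    random.Random(seed).shuffle(videos)

    n_train = int(len(videos) * 0.6)
    n_val = int(len(videos) * 0.2)

    return {
        "train": videos[:n_train],
        "val": videos[n_train:n_train + n_val],
        "test": videos[n_train + n_val:],
    }


# Example usage (hypothetical path):
# splits = split_dataset("dataset/videos")
```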

[0055] In step 102, the training set and validation set are constructed according to the input requirements of the time...
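
The sentence above is truncated, so the exact input requirements are not stated here. Purely as an assumed illustration of preparing data for a temporal model, the sketch below groups consecutive frames into fixed-length clips with clip-level labels; the clip length of 16, the binary frame labels, and the labeling rule are assumptions, not the patented procedure.

```python
import numpy as np


def make_clips(frames, frame_labels, clip_len=16):
    """Group consecutive frames into fixed-length clips with clip-level labels.

    `clip_len`, the frame labels (1 = tampered, 0 = pristine), and the
    "any tampered frame marks the clip" rule are illustrative assumptions.
    """
    clips, labels = [], []
    for start in range(0, len(frames) - clip_len + 1, clip_len):
        clip = np.stack(frames[start:start + clip_len])  # shape (T, H, W, C)
        clips.append(clip)
        labels.append(int(any(frame_labels[start:start + clip_len])))
    return np.stack(clips), np.asarray(labels)
```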

Abstract

The invention belongs to the technical field of multimedia information security, and particularly relates to a video object removal tampering time-space domain positioning method based on deep learning, which comprises the following steps: S1, training a time-domain positioning model and a spatial-domain positioning model; S2, inputting a video to be tested into the time-domain positioning model to obtain a tampered frame sequence; and S3, inputting the tampered frame sequence into the spatial-domain positioning model to obtain a tampered-region positioning result in each tampered frame. With this method, tampered frames can be detected in a tampered video, and the tampered region can be located within each tampered frame.
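
As a hedged sketch of the S2/S3 inference flow described above, the snippet below chains a frame-level temporal model and a pixel-level spatial model. The model interfaces (one tampering probability per frame, one tampering mask per frame) and the 0.5 threshold are assumptions for illustration, not the patented architectures.

```python
import numpy as np


def locate_tampering(video_frames, temporal_model, spatial_model, threshold=0.5):
    """Two-stage inference: temporal localization (S2) then spatial localization (S3).

    `temporal_model` is assumed to return one tampering probability per frame,
    and `spatial_model` one tampering mask per frame; both interfaces and the
    0.5 threshold are illustrative assumptions.
    """
    # S2: keep the frames the temporal model flags as tampered.
    frame_scores = np.asarray(temporal_model(video_frames))  # shape (num_frames,)
    tampered_idx = np.where(frame_scores > threshold)[0]
    tampered_frames = [video_frames[i] for i in tampered_idx]

    # S3: locate the tampered region inside each flagged frame.
    region_masks = [spatial_model(frame) for frame in tampered_frames]
    return tampered_idx, region_masks
```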

Description

Technical Field

[0001] The invention belongs to the technical field of multimedia information security, and in particular relates to a video object removal tampering time-space domain positioning method based on deep learning.

Background Technique

[0002] In recent years, video surveillance equipment has become ubiquitous in public and even private security, but the development of digital video and image processing technology has brought great challenges to the integrity and authenticity of video content. Once such videos are manipulated by criminals, there can be a huge impact on public security and judicial evidence collection. Usually, after a video has been skillfully tampered with, people cannot tell the real from the fake with the naked eye. Therefore, how to verify the authenticity and integrity of a video by computer is very important.

[0003] Digital video is composed of visual objects with a certain spatial structure and semant...

Claims

Application Information

Patent Type & Authority: Application (China)
IPC (8): H04N19/89, H04N13/282, G06K9/00, G06K9/32, G06K9/62, G06N3/04, G06N3/08
CPC: H04N19/89, H04N13/282, G06N3/08, G06V20/41, G06V10/25, G06N3/045, G06F18/23213
Inventor: 姚晔, 杨全鑫, 张竹溪, 张祯, 袁理锋, 陈临强
Owner: HANGZHOU DIANZI UNIV