
Video moving object tampering evidence obtaining method based on VGG-11 convolutional neural network

A VGG-11 convolutional neural network technology, applied in the field of video tampering detection, addressing the problems that existing forensics methods do not involve deep learning and that deep feature learning is not directly applicable to tampered objects.

Active Publication Date: 2019-11-15
广东外语外贸大学南国商学院
Cites: 9 · Cited by: 6

AI Technical Summary

Problems solved by technology

[0005] However, the above-mentioned forensics algorithms for target-object video tampering are mostly based on traditional image processing and classifier methods and do not involve deep learning. The reason is that a video frame contains many objects, and tampered objects are not well suited to direct feature learning by a deep network, so there has been no research on intra-frame video forensics combined with deep learning methods.




Detailed Description of the Embodiments

[0024] The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Apparently, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by persons of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.

[0025] Referring to Figure 1, a video moving object tampering forensics method based on a VGG-11 convolutional neural network includes the following steps:

[0026] S1: Calculate the motion residuals between forged and unforged frames in the video by an aggregation operation, and classify the frames as forged or unforged (a sketch of this aggregation step follows the list below);

[0027] S2: Extract motion residual map features from the motion residuals;

[0028] S3: Construct a convolutional neural network based on VGG-11.
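The excerpt above names an aggregation operation for computing motion residuals but does not spell out its exact form. Below is a minimal sketch of steps S1-S2, assuming the residual map of each frame is obtained by averaging absolute differences against neighbouring frames within a small temporal window; the function name, the window size and the choice of a mean as the aggregation are illustrative assumptions, not the patent's specification.

```python
import numpy as np

def motion_residual_maps(frames, window=2):
    """Sketch of steps S1-S2: per-frame motion residual maps.

    frames: array of shape (T, H, W), grayscale video frames.
    window: neighbouring frames on each side used in the aggregation
            (assumed value; not specified in the excerpt above).
    """
    frames = frames.astype(np.float32)
    T = frames.shape[0]
    residuals = np.empty_like(frames)
    for t in range(T):
        lo, hi = max(0, t - window), min(T, t + window + 1)
        neighbours = np.stack([frames[i] for i in range(lo, hi) if i != t])
        # Aggregate the absolute temporal differences into one residual map.
        residuals[t] = np.abs(neighbours - frames[t]).mean(axis=0)
    return residuals
```

Tampered regions of a moving object typically leave stronger or inconsistent residual energy in these maps, which is what the subsequent VGG-11 classifier is trained to detect.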


Abstract

The invention discloses a video moving object tampering evidence obtaining method based on a VGG-11 convolutional neural network. The method comprises the steps of: calculating the motion residual between forged and unforged frames in a video by an aggregation operation, and classifying the forged and unforged frames; extracting motion residual map features based on the motion residual; constructing a convolutional neural network based on VGG-11; training the VGG-11 based convolutional neural network with the motion residual map features; and using the trained network to judge whether a video moving object has been tampered with. Compared with the prior art, the method can better and automatically recognize forged frames in a tampered video.
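The remaining steps of the abstract, constructing a VGG-11 based network, training it on the motion residual map features, and classifying frames as tampered or not, can be sketched in PyTorch as follows. This is only an illustrative sketch, not the patent's exact architecture: the single-channel input, the two-class output head, the 224x224 input size and the SGD hyper-parameters are all assumptions.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg11

# VGG-11 backbone (torchvision configuration "A"), adapted to classify
# motion residual maps as forged vs. unforged.
model = vgg11(weights=None)
model.features[0] = nn.Conv2d(1, 64, kernel_size=3, padding=1)  # 1-channel residual maps (assumption)
model.classifier[6] = nn.Linear(4096, 2)                        # two classes: forged / unforged

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

# One training step on a dummy batch of residual maps resized to 224x224.
x = torch.randn(8, 1, 224, 224)   # stand-in for motion residual map features
y = torch.randint(0, 2, (8,))     # stand-in labels (0 = unforged, 1 = forged)
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()

# At inference time, frames whose residual maps are assigned class 1
# would be reported as containing a tampered moving object.
pred = model(x).argmax(dim=1)
```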

Description

Technical Field

[0001] The present invention relates to video tampering detection technology, and in particular to a video moving object tampering evidence collection method based on a VGG-11 convolutional neural network.

Background Art

[0002] In today's Internet era, with the continuous development of computer multimedia technology, more and more images, audio and video have become network resources shared by netizens. Digital video in particular has become the main information-bearing form on the network because of its intuitiveness, convenience and rich information content, and has also become an important data source for much social networking software. When necessary, these video files serve as evidence in many important matters in the fields of journalism, politics, insurance claims, defense and legal trials. However, the widespread use of powerful multimedia editing tools such as Adobe Photoshop, Adobe Premiere, Lightworks, Video EditMagic and Cinelerra makes it ...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K 9/00; G06K 9/62; G06N 3/04
CPC: G06V 20/42; G06V 20/46; G06N 3/045; G06F 18/241
Inventors: 甘艳芬, 钟君柳, 杨继翔, 赖文达
Owner: 广东外语外贸大学南国商学院