Image splicing tampering localization method based on a fully convolutional neural network

A convolutional neural network and image splicing technique in the field of deep learning, addressing problems of existing localization methods such as non-end-to-end structures, reduced model training speed, and the difficulty of training deep, complex models.

Active Publication Date: 2019-11-05
NANJING UNIV OF INFORMATION SCI & TECH

AI Technical Summary

Problems solved by technology

Liu et al. proposed replacing the CNN with a fully convolutional network (FCN) and combined it with a conditional random field (CRF) to fuse the localization results of three FCNs. Although accuracy improved, the CRF and the FCNs remain independent, so the method is not an end-to-end structure.
Chen et al. modified the use of the CRF, strengthened learning of the target region, and turned the whole network into an end-to-end learning system. Localization accuracy improved, but the deeper, more complex model is difficult to train and model training slows down.



Examples


Embodiment Construction

[0069] The technical solutions of the present invention will be described in detail below in conjunction with the accompanying drawings.

[0070] The present invention uses deep learning to improve the prediction accuracy of splicing tampering localization. Among deep learning techniques, the deep convolutional neural network, a supervised learning method, is well suited to processing visual information. This embodiment also designs residual modules, which reduce the difficulty of model training and accelerate convergence. Finally, post-processing based on a conditional random field is used to refine the prediction.
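
A minimal sketch of such a residual module, written here in PyTorch for illustration; the channel count, kernel sizes, and class name are assumptions rather than the patent's specified architecture:

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Residual module: the skip connection eases training of deep models
    and speeds up convergence (channel count and kernel size are assumed)."""
    def __init__(self, channels: int = 64):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        identity = x
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        # Skip connection: the block only needs to learn a residual correction.
        return self.relu(out + identity)
```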

[0071] An image splicing tampering localization method based on a fully convolutional neural network, as shown in Figure 1, includes the following steps:

[0072] Step 1: Establish a spliced tampered image library, including training images and test images (see the sketch after this step list);

[0073] Step 2: Initialize the...
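
As a minimal illustration of Step 1, the sketch below organizes a spliced tampered image library as a PyTorch dataset that pairs each image with its ground-truth tampering mask; the directory layout, file naming, and class name are assumptions for illustration, not details from the patent:

```python
import os
import numpy as np
import torch
from PIL import Image
from torch.utils.data import Dataset

class SplicedTamperDataset(Dataset):
    """Pairs each spliced image with its binary ground-truth tampering mask.
    The images/ and masks/ directory convention is assumed."""
    def __init__(self, root: str, split: str = "train"):
        self.img_dir = os.path.join(root, split, "images")
        self.mask_dir = os.path.join(root, split, "masks")
        self.names = sorted(os.listdir(self.img_dir))

    def __len__(self) -> int:
        return len(self.names)

    def __getitem__(self, idx: int):
        name = self.names[idx]
        img = np.array(Image.open(os.path.join(self.img_dir, name)).convert("RGB"))
        mask = np.array(Image.open(os.path.join(self.mask_dir, name)).convert("L"))
        img = torch.from_numpy(img).permute(2, 0, 1).float() / 255.0   # (3, H, W)
        label = torch.from_numpy((mask > 127).astype(np.int64))        # (H, W), 0/1
        return img, label
```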



Abstract

The invention discloses an image splicing tampering localization method based on a fully convolutional neural network. The method includes: establishing a spliced tampered image library; initializing an image splicing tampering localization network based on a fully convolutional neural network and setting up its training process; initializing the network parameters; reading a training image, training on it, and outputting a splicing localization prediction for the training image; calculating the error between the training image's splicing localization prediction and the ground-truth label, and adjusting the network parameters until the error meets the precision requirement; post-processing the prediction that meets the precision requirement with a conditional random field, adjusting the network parameters, and outputting the final prediction for the training image; and reading a test image, predicting it with the trained network, post-processing the prediction with a conditional random field, and outputting the final prediction for the test image. The method achieves high splicing tampering localization precision, the network is easy to train, and the network model converges readily.
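
The conditional-random-field post-processing described in the abstract could be realized, for example, with a fully connected CRF such as the one in the pydensecrf package; the sketch below assumes a two-class (authentic vs. spliced) softmax output from the network, and the kernel parameters and function name are illustrative assumptions rather than values taken from the patent:

```python
import numpy as np
import pydensecrf.densecrf as dcrf
from pydensecrf.utils import unary_from_softmax

def crf_refine(image: np.ndarray, probs: np.ndarray, iters: int = 5) -> np.ndarray:
    """Refine a 2-class softmax map with a dense CRF.

    image: uint8 RGB array of shape (H, W, 3)
    probs: float32 softmax probabilities of shape (2, H, W)
    returns: binary localization mask of shape (H, W)
    """
    h, w = image.shape[:2]
    d = dcrf.DenseCRF2D(w, h, 2)
    d.setUnaryEnergy(np.ascontiguousarray(unary_from_softmax(probs)))
    # Smoothness kernel and color-dependent appearance kernel; the sxy / srgb /
    # compat values are typical defaults, not parameters from the patent.
    d.addPairwiseGaussian(sxy=3, compat=3)
    d.addPairwiseBilateral(sxy=60, srgb=10, rgbim=np.ascontiguousarray(image), compat=10)
    q = d.inference(iters)
    return np.argmax(np.array(q).reshape(2, h, w), axis=0).astype(np.uint8)
```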

Description

Technical Field

[0001] The invention belongs to the technical field of deep learning, and in particular relates to an image splicing tampering localization method.

Background

[0002] With the development of science and technology, digital images are widely used in news, business, medical imaging, judicial criminal investigation, and other fields. At the same time, image processing software such as Photoshop, ACDSee, and Freehand can be used proficiently by more and more non-professionals, so the authenticity of digital images faces severe challenges. Passive forensics in digital image forensics has greater practical value and significance because it requires no preprocessing. Among the various tampering methods, splicing is the most common: it refers to copying part of one image and pasting it into another image, which will have a greater impact on the cont...


Application Information

IPC(8): G06N3/04, G06N3/08, G06T7/70
CPC: G06N3/084, G06T7/70, G06N3/045
Inventor: 陈北京, 吴韵清, 吴鹏, 高野
Owner: NANJING UNIV OF INFORMATION SCI & TECH