Deep Image Forgery: An Investigation on Forensic and Anti-forensic Techniques
- Publication Type: Thesis
- Issue Date: 2023
This item is open access.
Deep image forgeries powered by deep learning models, e.g., deepfakes, increasingly challenge the belief that seeing is believing. The privacy and security threats raised by deep image forgery, such as misleading information on social media, have become a major concern in the security community, and effective countermeasures are urgently needed. A common countermeasure is to develop detection systems that distinguish fake images from real ones. Although a series of forensic detectors has been proposed, several open challenges remain, such as cross-domain generalization and robustness against attacks. Moreover, countermeasures must be updated continually, given the ongoing technical advances behind deep image forgery. These challenges can be understood and addressed from two rival technical perspectives: forensics and anti-forensics. The forensic direction aims to develop more robust and generalizable detection systems that can handle forgeries in complex or unknown environments. The anti-forensic direction aims to reveal the vulnerabilities and weaknesses of a detection system by designing attacks that enable forged images to bypass detection.
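To make the forensic/anti-forensic setting concrete, the following is a toy sketch, not a method from the thesis: a minimal "detector" that flags images whose high-frequency spectral energy looks anomalous, a crude stand-in for the artifact cues that real forensic detectors learn. The threshold and the energy statistic are illustrative assumptions; images are assumed to be 2-D grayscale numpy arrays.

```python
import numpy as np

def high_freq_energy(image: np.ndarray, radius_frac: float = 0.25) -> float:
    """Fraction of spectral energy outside a centred low-frequency disc.

    Synthesised images often carry periodic upsampling artifacts that
    inflate this statistic; it is only an illustrative forensic cue.
    """
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    h, w = spectrum.shape
    yy, xx = np.ogrid[:h, :w]
    r2 = (yy - h / 2) ** 2 + (xx - w / 2) ** 2
    low = spectrum[r2 <= (radius_frac * min(h, w)) ** 2].sum()
    return 1.0 - low / spectrum.sum()

def is_forged(image: np.ndarray, threshold: float = 0.1) -> bool:
    # Toy decision rule (threshold chosen for illustration only):
    # flag images with unusually strong high-frequency content.
    return high_freq_energy(image) > threshold
```

An anti-forensic counterpart to this toy detector would suppress exactly the trace it relies on, e.g. by low-pass filtering the forged image, which is the spirit, at a much cruder level, of attacks that remove forensic traces to bypass detection.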
In this thesis, we study the deep image forgery detection problem with a focus on resolving the open challenges newly emerging in this field. We investigate the problem from both forensic and anti-forensic perspectives to provide comprehensive solutions. In the forensic direction, we propose two forgery detection methods: one exploits multi-level GAN model fingerprinting to enable task-specific forensics, and the other uses a multi-view reconstruction-classification learning framework for generalized and robust detection. In the anti-forensic direction, we design a novel black-box attack specific to deep image forgery detection systems, called the trace removal attack. In addition, we take a closer look at the generalization and robustness issues of deep image forgery detection from a frequency perspective, linking forensic and anti-forensic research through a novel frequency alignment method that benefits both directions. For each proposed method, we conduct extensive experimental evaluations involving multiple datasets and security scenarios, and we compare the methods with state-of-the-art baselines to demonstrate their superiority.
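The frequency perspective mentioned above can be illustrated with a standard diagnostic from the forgery-forensics literature, the azimuthally averaged power spectrum: generator upsampling tends to distort the high-frequency tail of this profile relative to camera images. The sketch below computes that radial profile; it is a common analysis tool assumed here for illustration, not the frequency alignment method proposed in the thesis.

```python
import numpy as np

def radial_power_profile(image: np.ndarray, n_bins: int = 32) -> np.ndarray:
    """Azimuthally averaged power spectrum (1-D radial profile, max-normalised).

    Comparing this profile between real and synthesised images is a common
    way to expose frequency-domain generation artifacts.
    """
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    h, w = spectrum.shape
    yy, xx = np.ogrid[:h, :w]
    r = np.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
    # Assign each frequency to a radial bin from DC (bin 0) outwards.
    bins = np.minimum((r / r.max() * n_bins).astype(int), n_bins - 1)
    profile = np.zeros(n_bins)
    for b in range(n_bins):
        mask = bins == b
        profile[b] = spectrum[mask].mean() if mask.any() else 0.0
    return profile / profile.max()
```

A smooth natural-looking image yields a profile that decays sharply away from DC, whereas spectra with an inflated tail hint at synthesis artifacts; aligning such frequency statistics between domains is the kind of gap the thesis's frequency alignment method targets.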