Fake Face Detection Based on Videos Using OpenCV and Neural Network Architecture
DOI: https://doi.org/10.63665/ksm1cw26

Keywords: Deepfake Detection, MobileNetV2, OpenCV, Convolutional Neural Network (CNN), Face Detection, Image Classification, Facial Image Analysis, Deep Learning, Digital Image Forensics, Fake Face Detection, Computer Vision, Image Preprocessing, Media Verification, Artificial Intelligence, Social Media Security

Abstract
The rapid development of the Internet has enabled the widespread distribution of manipulated facial images, particularly Deepfakes, which are increasingly difficult to detect with conventional methods. Current approaches tend to focus on spatial-domain features or complex network architectures, and they often lack robustness against sophisticated forgery techniques. To address this, we propose a MobileNetV2-based Deepfake detection framework that leverages efficient convolutional feature extraction to classify real and fake facial images. The framework begins with OpenCV-based preprocessing, including face detection, alignment, and normalization, to ensure consistent input quality and enhance the discriminative features available for detection. MobileNetV2, a lightweight yet powerful convolutional neural network, then automatically learns hierarchical spatial features from the preprocessed facial images, eliminating the need for handcrafted features. By combining OpenCV preprocessing with MobileNetV2, the proposed system captures the subtle visual artifacts and texture inconsistencies introduced by Deepfake manipulation. The approach yields robust, scalable detection that generalizes well across diverse datasets and real-world scenarios, offering a practical solution for automated Deepfake detection in security, media verification, and social media monitoring applications.
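The preprocessing stage described above can be sketched as follows. This is a minimal, hypothetical illustration, not the authors' implementation: it assumes a detected face region is cropped to a square, resized to MobileNetV2's standard 224×224 input, and scaled to [-1, 1] (the range produced by Keras's `mobilenet_v2.preprocess_input`). Pure NumPy stands in for OpenCV here so the sketch is self-contained; in the real pipeline `cv2.CascadeClassifier` (or a similar detector) would supply the face bounding box and `cv2.resize` would perform the interpolation.

```python
import numpy as np

def preprocess_face(img: np.ndarray, size: int = 224) -> np.ndarray:
    """Center-crop to a square, resize (nearest-neighbor), scale to [-1, 1].

    Hypothetical sketch of the OpenCV preprocessing step: in practice the
    crop would come from a face detector and cv2.resize would interpolate.
    """
    h, w, _ = img.shape
    side = min(h, w)
    top, left = (h - side) // 2, (w - side) // 2
    face = img[top:top + side, left:left + side]

    # Nearest-neighbor resize: pick a source row/column for each target pixel.
    idx = (np.arange(size) * side / size).astype(int)
    face = face[idx][:, idx]

    # Scale uint8 [0, 255] to float32 [-1, 1], matching MobileNetV2's
    # expected input normalization.
    return face.astype(np.float32) / 127.5 - 1.0

# A stand-in video frame (e.g. one frame after face detection).
frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
batch = preprocess_face(frame)[None, ...]  # add batch axis for the CNN
print(batch.shape)  # (1, 224, 224, 3)
```

The resulting batch would be fed to a MobileNetV2 classifier (e.g. `tf.keras.applications.MobileNetV2` with a binary real/fake head); that half of the pipeline is omitted here to keep the sketch dependency-free.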
License
Copyright (c) 2026 Authors

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
