Eye Blinking Feature Processing Using Convolutional Generative Adversarial Network for Deep Fake Video Detection
Funding: The authors received no specific funding for this work.
ABSTRACT
Deepfake video detection is an emerging technology for identifying manipulated faces in videos and images. Deepfake videos are widely used for malicious purposes, such as spreading misinformation online, so detection techniques are needed to determine whether a video is real or fake. Several deepfake detection methods have been introduced, but many of them have limitations and low accuracy when classifying videos as real or fake. This paper introduces an advanced deepfake detection pipeline that converts the video into frames, pre-processes the frames, and then applies feature extraction and classification. Pre-processing uses sequential adaptive bilateral Wiener filtering (SABiW) to remove noise from the frames and the 2D Haar discrete wavelet transform (2D-Haar) to detect the face. Features are then extracted from each pre-processed image with a depthwise separable residual network (DSRes). Finally, the video is classified as a deepfake or an original video using the convolutional attention advanced generative adversarial network (Con-GAN) model, whose weight coefficients are determined with the Mud Ring optimization algorithm. The overall performance of the proposed model is compared with existing models to demonstrate its superiority. The method is evaluated on four datasets, FaceForensics++, Celeb-DF v2, WildDeepfake, and DFDC, where it achieves an accuracy of 98.91% and a precision of 98.32%, providing accurate and efficient deepfake detection.
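To make the staged structure of the pipeline concrete, the sketch below shows one plausible way the frame-level flow described above could be organized in Python with OpenCV. It is a minimal illustration, not the authors' implementation: the SABiW filter, 2D-Haar face detector, DSRes feature extractor, and Con-GAN classifier are passed in as placeholder callables, since their internals are defined in the body of the paper rather than here.

```python
# Hypothetical sketch of the detection pipeline described in the abstract.
# The callables passed to classify_video (preprocess, face_detect,
# feature_net, classifier) are illustrative placeholders, not released code.
import cv2
import numpy as np


def extract_frames(video_path, step=10):
    """Read a video and return every `step`-th frame as an RGB array."""
    cap = cv2.VideoCapture(video_path)
    frames, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        idx += 1
    cap.release()
    return frames


def classify_video(video_path, preprocess, face_detect, feature_net, classifier):
    """Run the per-frame pipeline and average the frame-level scores."""
    scores = []
    for frame in extract_frames(video_path):
        denoised = preprocess(frame)        # SABiW noise-removal stage
        face = face_detect(denoised)        # 2D-Haar face localization stage
        if face is None:
            continue                        # skip frames with no detected face
        features = feature_net(face)        # DSRes feature extraction stage
        scores.append(classifier(features)) # Con-GAN real/fake score in [0, 1]
    if not scores:
        return None
    return float(np.mean(scores)) > 0.5     # True => predicted deepfake
```

In this reading, each stage consumes the previous stage's output and the final video-level decision is a simple average of per-frame scores; the paper's actual aggregation rule and thresholds may differ.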
Conflicts of Interest
The authors declare no conflicts of interest.
Open Research
Data Availability Statement
Data sharing is not applicable to this article as no new data were created or analyzed in this study.