Real-time deepfake detection: How Intel Labs uses AI to fight misinformation
Intel’s Ilke Demir explains how deepfake tech works and why AI researchers must collaborate with anthropologists, social scientists, and academic researchers.
Not long ago, creating deepfakes required significant computational resources. Now, synthetic media is common and often exploited for misinformation, hacking, and other malicious activities. Deepfakes are videos, speech recordings, or images in which the person or performance is artificially generated. They rely on deep learning architectures such as generative adversarial networks (GANs), variational autoencoders, and other AI models to produce remarkably lifelike and convincing content. These models can fabricate synthetic personas, generate lip-synced videos, and even perform text-to-image synthesis, making it increasingly difficult to distinguish genuine content from counterfeit.
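To make the GAN idea concrete, here is a minimal, illustrative sketch in PyTorch (the framework, network sizes, and image dimensions are assumptions for illustration; the article does not specify an implementation). A generator maps random noise to a fake image, a discriminator scores images as real or fake, and the two networks are trained against each other.

```python
# Minimal GAN sketch (illustrative only; not Intel's code).
# A generator maps random noise to a fake image; a discriminator
# scores images as real or fake. Training alternates between them.
import torch
import torch.nn as nn

latent_dim = 100      # size of the noise vector (assumed value)
img_size = 64 * 64    # flattened 64x64 grayscale image (assumed)

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, img_size), nn.Tanh(),  # pixel values in [-1, 1]
)

discriminator = nn.Sequential(
    nn.Linear(img_size, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),      # probability the input is "real"
)

loss_fn = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_images: torch.Tensor) -> None:
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) Train the discriminator to separate real from generated images.
    noise = torch.randn(batch, latent_dim)
    fake_images = generator(noise).detach()  # don't backprop into G here
    d_loss = (loss_fn(discriminator(real_images), real_labels)
              + loss_fn(discriminator(fake_images), fake_labels))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the generator to fool the discriminator.
    noise = torch.randn(batch, latent_dim)
    g_loss = loss_fn(discriminator(generator(noise)), real_labels)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

A production deepfake pipeline would use far deeper convolutional networks conditioned on a target face, but this adversarial feedback loop is what drives the realism the article describes: the generator improves precisely because the discriminator keeps catching its mistakes.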
Intel Labs has pioneered real-time deepfake detection technology to counter this escalating issue. In an interview with ZDNet, Ilke Demir, a senior research scientist at Intel, elaborated on the technology underpinning deepfakes, Intel's detection strategy, and the ethical implications surrounding the development and application of such technology.