Intel has developed a technology capable of distinguishing between real videos and deepfakes in real time, thanks to skin analysis.
Its new technology, FakeCatcher, can detect fake videos with a 96% accuracy rate and is the “world’s first real-time deepfake detector” to return results in milliseconds.
“Deepfake videos are everywhere now. You’ve probably seen them before; videos of celebrities doing or saying things they’ve never actually done,” said Intel Labs principal researcher Ilke Demir.
The FakeCatcher deepfake detector works by analyzing the “blood flow” in video pixels to determine the authenticity of a video within milliseconds.
Most deep learning-based detectors look at raw data to try to find signs of inauthenticity and identify what’s wrong with a video. In contrast, FakeCatcher looks for authentic clues in real videos, evaluating what makes us human, such as the “blood flow” in the pixels of a video.
When our heart pumps blood, our veins change color. These blood flow signals are collected from across the face and algorithms translate these signals into maps.
“Then, through deep learning, we can instantly detect whether a video is real or fake,” Intel said.
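Intel has not published FakeCatcher's internals, but the idea it describes, reading a pulse-like "blood flow" signal out of face pixels, resembles remote photoplethysmography (rPPG). As a rough, hypothetical sketch of that concept only (the region coordinates, frame sizes, and thresholds below are illustrative, not Intel's), one could average the green channel over a skin region per frame and check whether the dominant frequency falls in the human heart-rate band:

```python
import numpy as np

def extract_ppg_signal(frames, region):
    """Average green-channel intensity over a (hypothetical) facial skin
    region for each frame -- a crude stand-in for the pixel-level
    'blood flow' signals the article describes."""
    y0, y1, x0, x1 = region
    return np.array([f[y0:y1, x0:x1, 1].mean() for f in frames])

def dominant_frequency(signal, fps):
    """Strongest periodic component (Hz) of the detrended signal; for a
    real face this should sit in the heart-rate band (~0.7-4 Hz)."""
    signal = signal - signal.mean()
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    spectrum[0] = 0.0  # ignore the DC component
    return freqs[spectrum.argmax()]

# Synthetic demo: 10 s of 30 fps "video" whose skin region pulses at
# 1.2 Hz (72 bpm) -- purely made-up data to exercise the functions.
fps, seconds = 30, 10
rng = np.random.default_rng(0)
frames = []
for ti in np.arange(fps * seconds) / fps:
    frame = np.full((64, 64, 3), 128.0)
    frame[:, :, 1] += 2.0 * np.sin(2 * np.pi * 1.2 * ti)  # subtle pulse
    frame += rng.normal(0, 0.5, frame.shape)              # sensor noise
    frames.append(frame)

hr_hz = dominant_frequency(extract_ppg_signal(frames, (16, 48, 16, 48)), fps)
print(round(hr_hz, 2))  # ~1.2 Hz, i.e. ~72 bpm
```

A real detector would add face tracking, per-region signal maps, and a trained classifier on top; the absence (or spatial inconsistency) of such a pulse signal is what would flag a synthetic face.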
According to the company, up to 72 streams can be analyzed simultaneously using one of its 3rd generation Xeon processors. However, these processors are considerably more powerful than those in consumer laptops and desktops, and can cost up to around £4,000.
Deepfake videos are a growing threat: according to Gartner, companies will spend up to $188 billion on cybersecurity solutions to counter them.
It is also difficult to detect these deepfake videos in real time because detection apps require uploading videos for analysis and then waiting hours for results.
What are deepfakes?
Deepfakes are videos and images that use deep learning AI to forge something that doesn’t actually exist. They are best known for being used in porn videos, fake news, and pranks.
Misinformation can make events that never happened appear real, place people in situations they were never in, or depict them saying things they never said.
Above all, deepfakes risk eroding trust in the media.
FakeCatcher can help restore trust by allowing users to distinguish between real and fake content.
Social media platforms could leverage the technology to prevent users from uploading harmful deepfake videos.