Intel’s ‘FakeCatcher’ can detect deepfakes with 96% accuracy

The FakeCatcher deepfake detector works by analyzing the “blood flow” in video pixels to determine the authenticity of a video within milliseconds (Image: Intel)

Intel has developed a technology capable of distinguishing between real videos and deepfakes in real time through skin analysis.

Its new technology, FakeCatcher, can detect fake videos with a 96% accuracy rate and is the “world’s first real-time deepfake detector” to return results in milliseconds.

“Deepfake videos are everywhere now. You’ve probably seen them before; videos of celebrities doing or saying things they’ve never actually done,” said Intel Labs principal researcher Ilke Demir.

The FakeCatcher deepfake detector works by analyzing the “blood flow” in video pixels to determine the authenticity of a video within milliseconds.

Most deep learning-based detectors look at raw data to try to find signs of inauthenticity and identify what’s wrong with a video. In contrast, FakeCatcher looks for authentic clues in real videos, evaluating what makes us human, such as the “blood flow” in the pixels of a video.

When our heart pumps blood, our veins change color. FakeCatcher collects these blood-flow signals from across the face, and algorithms translate them into spatiotemporal maps.

“Then, through deep learning, we can instantly detect whether a video is real or fake,” Intel said.
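Intel has not published FakeCatcher’s code, so as a rough illustration of the remote-photoplethysmography (rPPG) idea described above, here is a minimal sketch: average the green channel over facial regions frame by frame to approximate blood-flow signals, then normalize them into a map that a downstream deep-learning classifier could consume. The region coordinates, synthetic pulse, and function names are all illustrative assumptions, not Intel’s actual pipeline.

```python
import numpy as np

def extract_ppg_signals(frames, regions):
    """For each facial region, average the green channel per frame to
    approximate a remote photoplethysmography (rPPG) signal."""
    signals = []
    for (y0, y1, x0, x1) in regions:
        patch = frames[:, y0:y1, x0:x1, 1]  # green channel, every frame
        signals.append(patch.mean(axis=(1, 2)))
    return np.stack(signals)  # shape: (n_regions, n_frames)

def ppg_map(signals):
    """Normalize each region's signal to zero mean / unit variance,
    giving a 2-D 'map' a classifier could take as input."""
    mean = signals.mean(axis=1, keepdims=True)
    std = signals.std(axis=1, keepdims=True) + 1e-8
    return (signals - mean) / std

# Demo on synthetic video: 64 frames of 32x32 RGB noise plus a faint
# periodic green-channel pulse standing in for blood flow (~72 bpm at 30 fps).
rng = np.random.default_rng(0)
frames = rng.uniform(0, 255, (64, 32, 32, 3))
pulse = 2.0 * np.sin(2 * np.pi * 1.2 * np.arange(64) / 30.0)
frames[:, :, :, 1] += pulse[:, None, None]

# Four illustrative face patches (a real system would detect facial regions).
regions = [(0, 16, 0, 16), (0, 16, 16, 32), (16, 32, 0, 16), (16, 32, 16, 32)]
m = ppg_map(extract_ppg_signals(frames, regions))
print(m.shape)  # (4, 64)
```

In the real system, maps like `m` would be fed to a trained network that decides, in milliseconds, whether the physiological signal looks genuinely human.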

According to the company, up to 72 streams can be analyzed simultaneously on one of its 3rd-generation Xeon processors. However, these server-grade chips are far more powerful than the processors in consumer laptops and desktops, and can cost up to around £4,000.

Deepfake videos are a growing threat; Gartner forecasts that companies will spend up to $188 billion on cybersecurity solutions to counter them.

Detecting deepfake videos in real time is also difficult, because existing detection apps require uploading a video for analysis and then waiting hours for results.


What are deepfakes?

Deepfakes are videos and images that use deep learning AI to forge something that doesn’t actually exist. They are best known for being used in porn videos, fake news, and pranks.

Such misinformation can make events that never happened appear real, place people in situations they were never in, or depict them saying things they never said.

Above all, deepfakes risk eroding trust in the media.

In April, Ukraine accused Russia of preparing to launch a ‘deepfake’ of President Volodymyr Zelensky surrendering.

FakeCatcher can help restore trust by allowing users to distinguish between real and fake content.

Ukraine had accused Russia of preparing to launch a ‘deepfake’ of President Volodymyr Zelensky’s surrender (Photo: Twitter https://twitter.com/IntelNessa/status/1504217524883365888)

Social media platforms could leverage the technology to prevent users from uploading harmful deepfake videos.

The technology can also be used by news agencies to avoid inadvertently amplifying manipulated videos. Nonprofits could use the platform to democratize deepfake detection for everyone.

MORE : Deepfake porn is destroying lives – but, as one woman discovered, it only takes 8 seconds to create an image

MORE : Turns Out Bruce Willis Didn’t Sell His Digital Rights to Deepfake After All
