The Threat of Deepfakes in the Cybersecurity Sphere

The technology behind deepfakes has existed since the mid-1990s. It was first seen in The Crow, where digital effects completed Brandon Lee's scenes after his death during filming, and the results were as realistic as the era allowed. So why all the fuss now?

The main issue is how readily available the technology has become. Until recently, producing a natural-looking digital person required specialist knowledge of expensive, high-end CGI software. These days, deepfakes use AI, letting anyone with a computer make fake videos starring whoever they like.

They only need a few images or videos of the subject and, as Kenny Natiss notes, a disturbingly realistic deepfake can be produced.


The Rise of Deepfakes

Deepfakes emerged in late 2017. And while they may seem like clever technology developed by an intelligence agency, they were the creation of an anonymous Reddit user. That said, the user didn't invent the technique from thin air: it's built on Google's open-source TensorFlow machine-learning library.

Deepfakes use artificial intelligence to superimpose one face onto another. How does it work? The software tracks the position and movement of the original face and substitutes the replacement frame by frame, ensuring the new face matches the dimensions and lighting conditions of the source video.
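
To make that concrete, here is a minimal sketch of such a frame-by-frame pipeline in Python, using OpenCV for video handling and face detection. The input and output file names are assumptions, and swap_face is a hypothetical placeholder; in a real deepfake tool, a neural network trained on images of the subject generates the replacement face.

    import cv2

    def swap_face(frame, box):
        # Hypothetical stand-in for the learned face-swap model; a real
        # tool would paste a generated face over the detected region.
        return frame

    # Off-the-shelf face detector bundled with OpenCV.
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )

    cap = cv2.VideoCapture("source.mp4")  # assumed input video
    fps = cap.get(cv2.CAP_PROP_FPS)
    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    out = cv2.VideoWriter("fake.mp4", cv2.VideoWriter_fourcc(*"mp4v"),
                          fps, (width, height))

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Locate the face in each frame so the replacement can match its
        # position and size -- the "dimensions and conditions" step.
        for box in detector.detectMultiScale(gray, scaleFactor=1.1,
                                             minNeighbors=5):
            frame = swap_face(frame, box)
        out.write(frame)

    cap.release()
    out.release()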

In April 2018, Jordan Peele used deepfake technology to release a PSA starring Barack Obama. The video shows the former president saying various ridiculous things before discussing fake news. 

Not only does the video present a visual deepfake, but it also demonstrates audio faking. Peele paired the visuals with Adobe's VoCo audio tool to create a remarkably convincing result.

Deepfakes: Are They Disinformation?

Despite the somewhat scary implications of deepfakes, they're still (thankfully) far from perfect.

Of course, a deepfaked video will look terrifyingly real at times, but the animation as a whole tends to contain minor glitches and imperfect matches that signpost it as fake.
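
As a toy illustration of that weakness, the sketch below flags sudden frame-to-frame jumps in pixel intensity, the kind of discontinuity an imperfect swap can leave behind. The file name and threshold are assumptions, and real forensic detectors are far more sophisticated.

    import cv2
    import numpy as np

    cap = cv2.VideoCapture("suspect.mp4")  # assumed input video
    prev = None
    glitches = []
    frame_idx = 0

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
        if prev is not None:
            # Mean absolute change between consecutive frames; a swap
            # that fails for a few frames shows up as a spike.
            delta = float(np.mean(np.abs(gray - prev)))
            if delta > 25.0:  # assumed threshold, tuned per video
                glitches.append(frame_idx)
        prev = gray
        frame_idx += 1

    cap.release()
    print(len(glitches), "suspicious frame transitions:", glitches[:10])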

Currently, the technology isn't good enough to spread convincing disinformation. In fact, most deepfake enthusiasts have used it to make pornographic content, much to the relief of security professionals around the world. And even though some political videos have emerged, they're too easily spotted to cause a problem.


Evaluating the Risks of Deepfakes as Cybersecurity Threats

As the above suggests, even an untrained eye can spot a deepfake, meaning they aren't yet a significant security threat. But the technology is constantly improving. Presently, the greatest concern is the use of deepfakes by state-sponsored actors, who have the resources to craft ultra-convincing content.

The genuine threat begins when anybody with a computer can create deepfakes of the same caliber as those well-resourced groups.

Projections suggest these videos could be a national security problem, affecting everyone from businesses to end users. But thankfully, cybersecurity pros are already developing countermeasures. 

The Fake News Megaphone

At the end of the day, deepfakes don't really present a new problem. Instead, they act as a potential megaphone for an existing one: fake news.

Much of the population fails to check whether a news source is credible, and it's this uncritical consumption that fuels the problem. Even near-perfect deepfakes would be less of a threat if people weren't so quick to accept anything they hear or read online.