Expert: “Perfectly Real” Fake Videos — That Can Ruin Your Reputation — Just 6 Months Away

Imagine you’re a parent locked in a custody dispute, and a video emerges of you abusing your child; or that you’re a police officer, and you’re seen on video brutalizing a suspect; or that you’re a teacher “caught” on video beating a young student; or that a video goes public of your favorite politician engaging in serious sexual misconduct. Now imagine that the guilty party is actually the person who made the video — because it looks real, but isn’t. 

Welcome to the brave new world of the “deepfake.”

It has been said that seeing is believing. But this may change, at least regarding online content, with “perfectly real” faked videos, which Hao Li, an associate professor of computer science, says are perhaps just months away.

As the International Business Times reports, “Morphed images and videos that appear ‘perfectly real’ in everyday life will be accessible to [average] people within six months or a year, computer graphics entrepreneur Hao Li has said. The revolutionary technique may bother the fact checkers but for animation films it may be a game changer soon.”

“‘In some ways, we already know how to do it, but it is only a matter of training with more data and implementation’ to make manipulated graphics appear real, the Taiwanese-descent deepfake pioneer said,” the site also informs.

“The technology of ‘deepfake’ — the process to manipulate videos or digital representation using computers and machine-learning software to make them appear real, even though they are not — has given rise to concerns about how these creations could cause confusion and propagate misinformation, especially in the context of global politics,” the Times continues.


In fact, online “disinformation through targeted social-media campaigns and apps such as WhatsApp has already roiled elections around the world,” CNBC adds.

“‘It’s still very easy, you can tell from the naked eye most of the deepfakes,’ Li, an associate professor of computer science at the University of Southern California, said on ‘Power Lunch,’” CNBC also tells us.

“‘But there also are examples that are really, really convincing,’ Li said, adding those require ‘sufficient effort’ to create.”

Li had previously predicted, at a Massachusetts Institute of Technology conference just last week, that perfect deepfakes were just “two to three years” away. But “Li said recent developments, in particular the emergence of the wildly popular Chinese app Zao and the growing research focus, have led him to ‘recalibrate’ his timeline,” CNBC further reports.

“Zao is a face-swapping app that allows users to take a single photograph and insert themselves into popular TV shows and movies. It is among China’s most popular apps, although significant privacy concerns have arisen,” the site further relates.

“‘Soon, it’s going to get to the point where there is no way that we can actually detect [deepfakes] anymore, so we have to look at other types of solutions,’ Li said on ‘Power Lunch,’” CNBC continues.

Li said that the problem with deepfakes isn’t the technology’s existence, but that it can be used for evil as well as good. While it’s true that any tool — whether nukes or guns or robots or the Internet — can be used for good or ill, and that it’s too often the latter given man’s fallen nature, the technology-specific problem wouldn’t exist if the technology itself didn’t. The point, however, is that since it will be developed by someone, all we can do is try to stay a step ahead of the miscreants.

Thus does Li say that academic research is imperative. “‘If you want to be able to detect deepfakes, you have to also see what the limits are,’ Li said,” CNBC also writes. “‘If you need to build A.I. frameworks that are capable of detecting things that are extremely real, those have to be trained using these types of technologies, so in some ways it’s impossible to detect those if you don’t know how they work.’”

This is why “Li is collaborating closely with Hany Farid of UC Berkeley, one of the world’s best digital forensics scientists,” MIT Technology Review (MTR) reports. “The collaboration reflects both competition — after all, Farid’s job is to detect the kind of deepfakes Li creates — and, in a bigger sense, close cooperation.”  

And hopefully it will yield fruit. After all, Li says, “One theory on why it [deepfake] hasn’t caused even more harm yet is simply because the quality isn’t there yet.”

“But it will get there,” he warns.

It certainly is already close. Consider the video below of actor and comedian Bill Hader on Conan O’Brien’s show doing amusing impressions of Al Pacino and Arnold Schwarzenegger. Deepfake tech has been used to morph his face into that of the given celebrity during the impressions.

Then there’s this “Top 10 Deepfake Videos” compilation created by WatchMojo.com:

Unfortunately, there’s nothing funny about this technology’s more malicious applications. For example, forging documents has long been a method used to damage political opponents. The Cold War Marxists were notorious for it. In more recent times, forged documents regarding President George W. Bush’s service in the Texas National Guard were circulated in 2004 for the purpose of harming his reelection campaign.

In this vein, will we see convincing deepfake videos damaging to President Trump circulated shortly before the 2020 election?

If Professor Li’s prediction is correct and considering today’s political climate, it almost seems like a given.

Image: piranka via iStock / Getty Images Plus