16 September 2019
How Much Should We Worry about Deep Fakes?
By Taylor Barkley


If you spend any sleepless nights worrying about deep fakes, give yourself a break. It’s understandable that people, experts included, are concerned about this technology, but the reality gives us reason to be hopeful and is certainly less dire than current predictions suggest. The dire warnings and reactions seem to rest on dubious assumptions about cultural adaptation.


Humans have had to deal with deception for as long as we have been around, and individuals and cultures adapt to deal with potential deception. Likewise, as communication techniques and media have developed, cultural adaptations have emerged to deal with fakery. History and cultural complexity suggest we will win out in a world of deep fakes.


Before we wring our hands into a knot over this new deep fake technology, think for a moment about how past technologies were themselves susceptible to counterfeiting. Take writing and photography. Jeffrey Westling at Techdirt does an excellent job walking through some of the history of technological deception and cultural adaptation.


Society has largely been able to inoculate itself against rampant bamboozlement via a complex web of cultural adaptations.


The assumption at the core of the most dire predictions about deep fakes is that video is accepted as truth and will continue to be. This assumption is dubious; if past adaptations are any indication, it will not remain the case.


For most of human history video recordings have not existed (surprise!), yet somehow people managed to sort out the truth. Video is a relatively new technology, and its revered status as the arbiter of truth may prove short-lived. As a lead deep-fake developer noted in MIT Technology Review about the advent of convincing, undetectable deep fakes, “When that point comes, we need to be aware that not every video we see is true.”


The same goes for audio recordings. Where once a convincing fake audio recording was difficult to produce, that is no longer the case. In a recent BBC article, cybersecurity firm Symantec cites three cases in Britain where senior financial controllers were duped by fake audio of their CEO into transferring cash. Corporations and others will develop ways to combat this fraud, as they have for every past tactic.


At worst, no video or audio recording will be taken at face value. What is more likely is that we will all have to be a bit more judicious about what we take to be true.


Will a deep fake cause some people to think something is true that really is not? Yes. Might one arrive at just the right time and be just convincing enough to alter the course of human history? Yes. There are non-zero chances these things will happen. The fears, however, seem to assume a rigid society and culture. Culture clearly changes, reacts, and evolves. As humanity has done with every new technology, norms will change and we will adapt.


An additional reason not to fear the deep fake is the sheer amount of coordination that would have to happen for something to be convincing enough and timed well enough to beat any kind of verification process or anti-spoofing technology. The typical nightmare scenario is that a convincing deep fake will be released the evening before an election day, swing the vote against the leader, thus “undermining democracy.”


But that assumes just one deep fake, which would take some coordination on the part of the bad actors. What if 11 or 10,011 deep fake videos of a sitting president were released, all saying outlandish and contradictory things? That would nullify any intended effect. Instead of being fooled, we might just be annoyed and treat those videos like we treat spam email.


There are already technologies marketed to sniff out deep fakes, whether image or audio. And, as creators and artists everywhere have long sought, experiments in digital signatures and tokenization are exploring new ways to verify that whatever is being viewed is authentic.
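To make the digital-signature idea concrete, here is a minimal sketch in Python using only the standard library. It uses an HMAC as a stand-in for the public-key signatures a real provenance system would use; the media bytes and publisher key here are hypothetical, purely for illustration:

```python
import hashlib
import hmac

def fingerprint(data: bytes) -> str:
    # Content fingerprint: any alteration to the media changes the digest.
    return hashlib.sha256(data).hexdigest()

def sign(data: bytes, key: bytes) -> str:
    # HMAC stands in for a publisher's signature (a real system would
    # use a public-key scheme so anyone can verify without the secret).
    return hmac.new(key, data, hashlib.sha256).hexdigest()

def verify(data: bytes, key: bytes, signature: str) -> bool:
    # Constant-time comparison guards against timing attacks.
    return hmac.compare_digest(sign(data, key), signature)

# Hypothetical media and key, for illustration only.
original = b"frame bytes of the genuine video"
tampered = b"frame bytes of a deep-faked video"
key = b"publisher-secret-key"

sig = sign(original, key)
```

Any edit to the media changes its digest, so a signature issued over the genuine recording fails to verify on a doctored copy; this is the basic property that content-provenance experiments build on.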


All this being said, the overreaction has perhaps served a good purpose. More people are aware of deep fakes now than they would be otherwise. Knowledge about fakes serves as the antidote. It is yet another reminder of the need to think critically, pause, verify, and be open to correction when we are wrong. Concerns about deep fakes focus on the tree and miss the forest.

Taylor Barkley is program officer for tech and innovation at Stand Together.