Synthetic media — be it text, audio, or, especially, deepfake videos — forms what Omer Benjakob correctly labels a “new front” in the battle against fake news: smart and lean online forces, established specifically to spread disinformation, produce oppressive pornographic images, or disrupt political systems, are now among us. Recognizing their output is increasingly hard, if not impossible, for the untrained human eye.
Overall, as most journalistic coverage of the topic tells us, deepfakes — alongside other AI technologies, machine learning, and neural networks in general — are here and will cast a shadow of technological terror over society. In much of this coverage, our future is deemed dystopian: humankind has lost the battle to machines, and episodes of the TV series “Black Mirror” will pale in comparison with the havoc sown by technology. In fact, research I conducted with a colleague from the University of Haifa (Yael Oppenheim) found that most images and narratives journalists worldwide use to cover these technologies tend to stress destruction, loss, crisis, and fear regarding the future of humanity.
It is, however, important to contextualize this alarmist media frenzy. First, so-called deepfake technology does indeed raise new questions about the way we perceive authenticity: everything from the documentation of reality to the recording of facts, and even the meaning of truth itself, is now being called into question. Often it is hard to determine whether a video we have just watched or an audio recording we have just heard is real or fake — that is, synthetic media produced by AI.
The human senses, the source of so much of our knowledge, are rendered obsolete in a sense; even existing knowledge does not fully permit us to identify synthetic media with confidence. Moreover, multiple nefarious players online do indeed employ AI technology for malicious purposes, manipulating synthetic texts and images, whether to create political disturbance or to produce pornographic content (which, in the end, seems to be the most common use of the technology online).
But alongside these undoubtedly dangerous developments, there is another side to the synthetic media story, one that journalists worldwide tend to ignore, choosing instead to emphasize the sensational, dystopian, and frightening aspects of synthetic media.
If only for the sake of countering this hype, it is important to take a moment and recognize that deepfakes are based on technologies that actually have many positive and productive aspects. In the field of health care, for example, researchers have produced synthetic brain MRI images to train disease-detection algorithms. Researchers studying oncology and Alzheimer’s disease have used the technology to develop diagnostic systems based on fake patient data, an approach that facilitates less invasive tests while also protecting the privacy of patients’ medical records.
Similarly, voice-related deepfake manipulations allow the creation and spread of important social messages. Take, for example, David Beckham’s video, in which he volunteered to help promote awareness for malaria, an illness still affecting tens if not hundreds of millions around the globe. In this deep fake-based video, the soccer star delivers a compelling message in no less than nine languages, though “he” only really speaks one of them.
The remaining languages were synthetically forged into the original video. The fake portions of this video, produced by a group called “Malaria must die” with the help of the synthetic media firm Synthesia, were, in this case, necessary, important, and far from dangerous. If anything, they played a key role in helping an aid organization battle the spread of malaria by raising awareness and funding.
So why is it that most media coverage still focuses almost exclusively on the negative, dangerous aspects of deepfake technology? This tendency, it appears, is not new. Historically, numerous new technologies were initially perceived with much apprehension and cast in destructive terms: the printing press inspired fears within the church that it would lose its followers, and was declared a hazardous technology; television gave voice to a similar fear of the loss of social solidarity and the traditional family structure; and deepfakes? They have brought with them prophecies of doom regarding the end of truth, the loss of reliable social documentation, and the destruction of humanism.
Is it true that deepfakes require us to substantially rethink the conditions of our shared sense of social truth and facticity? Yes. Does that mean the technology is inherently bad, and that we are all doomed? No. And while I have just shown that there is some good in the use of this technology, we also already know that on the web sensational content sells — and that clickbait sells even more. News outlets, like many other businesses vying for our attention online, need to make a profit.
But what we need to remind ourselves in this context is that journalists also have an important social role, not only a commercial one. By adhering to the dystopian narrative, they betray their audiences in two ways. The first is by producing a partial, one-dimensional representation of reality that does not promote awareness of complex issues such as the relationship between technology and politics. Instead of creating hysteria, journalists can encourage informed and intelligent discussion of the subject, present a variety of perspectives, and address the positive and vital aspects alongside the frightening and dystopian ones. In doing so, journalists can help promote digital literacy instead of fear.
The second and more important reason we need more nuanced coverage is the real need to stress the human component in technology. By clinging only to the frightening technological component, the press actually gives internet users a pass and allows them to shirk their own responsibility. True, deepfakes are scary — they undermine our perception of reality and call into question much of what we knew about documentation and the conditions of factuality. We still do not have well-designed artificial intelligence technologies, for example, that can detect deepfake videos. However, it is important to remember that those who produce deepfakes, those who distribute them, those who receive fake information on instant messaging applications and pass it on, those who do not stop to think critically about the image they saw or the recording they heard on Facebook or Twitter — they are us.
While we, the citizens, the ordinary people, did not create the algorithms behind deepfakes, we nonetheless continue to take part in their distribution and prevalence. This human aspect, and the civic responsibility through which it should be understood, is not echoed strongly enough by the press — we have a responsibility, a cultural and political role to play, at this turning point in the history of the relationship between humans and the machines they create.
Along with the journalists who cover the issue, we all need to assume more responsibility, be more critical, and remember that behind much of the content we consume online today lies an array of political and social interests that helped bring it into existence. Such interests are not new, even if deepfakes are. This does not mean humanity will be destroyed; it just means that all of us — journalists and media consumers alike — need to be more critical.
Dr. Aya Yadlin-Segal is a senior lecturer in the Department of Politics and Communication at Hadassah Academic College.