How Will Deep Fakes Affect the Field of Journalism?
There were many wonderful speakers and presentations during Spring Immersion 2019, but the one that caught my attention most was the presentation given by Professor Nina Brown, titled "I Know This Much Is True: Deep Fakes and the Law."
Professor Brown discussed the artificial intelligence-based technology that uses advanced algorithms and large sets of data to create fake outputs that look real. Using this technology, audio, images, video and even prose can be made to look genuine, as if the person in the photograph, audio or video really said or did the things depicted.
The name "deep fakes" comes from deep learning, a form of machine learning that uses layered neural networks to learn representations of data. Deep fakes draw on enormous sets of data to build photorealistic images, video and audio for a believable fake.
There are a few ways in which deep fakes are created, the easiest and most common being the face swap, in which a person's face is transposed onto another's body to make it look as if that person is doing or saying the things shown in the image or video. This type of deep fake is often used on very famous and powerful people, such as heads of state, corporate chiefs and celebrities, who already have enormous amounts of their data (including photos, audio and video) online. It is the type used in the examples Professor Brown discussed, such as the Jordan Peele deep fake of former President Barack Obama [1], the video that transposed actor Steve Buscemi's face onto actress Jennifer Lawrence [2], and the deep fakes placing actor Nicolas Cage's image [3] into a variety of images and videos.
A more advanced method of creating deep fakes is known as a Generative Adversarial Network (GAN).
What is a GAN? According to Skymind [4], an AI infrastructure-building technology firm, GANs are "deep neural net architectures comprised of two nets, pitting one against the other (thus the 'adversarial')." GANs can confuse AI systems into seeing non-existent objects and into accepting a particular output as authentic. A GAN uses at least two neural networks: one is the generator, which builds the fake from the underlying data sets; the other is the discriminator, which tries to tell the fakes from the real data. Both nets try to outsmart the other.
The discriminator, serving as the authenticator, tries to spot the "fakes" created by the generator. The generator in turn does its best to produce convincing output that can deceive the discriminator and pass for "real" when judged against the original data sets. The two nets form a feedback loop, and this loop provides continuous information that helps the algorithms learn and become better at generating and discriminating. Over time, the idea is for the generator to get so good at producing outputs that they are almost impossible for the discriminator to pick out, and for the discriminator to get so good that it can pick out the fakes no matter how convincing they become. Both nets learn and improve with time.
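The adversarial loop described above can be sketched in a few lines of code. The toy below is an illustrative sketch only (a real deep fake GAN uses deep convolutional networks and image data): a one-parameter "generator" produces numbers and a tiny logistic-regression "discriminator" tries to tell them apart from "real" data drawn from a target distribution. Because fooling the discriminator requires matching the real data, the generator's parameter drifts toward the real mean; all names and values here are hypothetical choices for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

REAL_MEAN = 4.0          # the "real" data: samples from N(4, 1)
mu = 0.0                 # generator's single parameter, starts far from the truth
w, b = 0.0, 0.0          # discriminator: D(x) = sigmoid(w*x + b), "probability x is real"
lr_d, lr_g = 0.1, 0.05   # learning rates for discriminator and generator
batch = 128

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for step in range(2000):
    real = rng.normal(REAL_MEAN, 1.0, batch)
    fake = mu + rng.normal(0.0, 1.0, batch)       # generator output

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0
    # (gradient ascent on log D(real) + log(1 - D(fake)))
    s_real = sigmoid(w * real + b)
    s_fake = sigmoid(w * fake + b)
    w += lr_d * (np.mean((1 - s_real) * real) - np.mean(s_fake * fake))
    b += lr_d * (np.mean(1 - s_real) - np.mean(s_fake))

    # Generator update: shift mu so the fakes look "real" to the discriminator
    # (gradient ascent on log D(fake); d/dmu log D = (1 - D) * w)
    z = rng.normal(0.0, 1.0, batch)
    s = sigmoid(w * (mu + z) + b)
    mu += lr_g * np.mean((1 - s) * w)

print(f"generator mean after training: {mu:.2f} (real mean: {REAL_MEAN})")
```

After training, the generator's mean ends up near the real mean, i.e. the discriminator can no longer reliably separate the two, which is exactly the equilibrium the loop described above is driving toward.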
The ramifications of deep fakes are manifold and potentially dangerous. Creatively manipulating images and videos is nothing new; Hollywood has been doing it for decades, as Professor Brown pointed out in her presentation. The 1994 blockbuster movie Forrest Gump had scenes showing the protagonist, Forrest, apparently shaking hands with President John F. Kennedy.
"We are in an existential battle for truth in the digital domain"
But there are more serious ramifications when it comes to deep fakes, particularly where safety and national security are concerned. Deep fakes can be used to manipulate photographs and videos in ways that could trigger an international incident, even a war, or a crash in the stock market. Intelligence and other national security agencies worry about the many ways adversarial parties can manipulate situations using deep fakes. They could greatly influence intelligence gathering and even the ways wars are fought. What if an adversarial power could corrupt or manipulate the data sets behind Google Earth and other sources of information about the earth's landscape? Could Google Earth be altered in a nefarious manner that misleads battle planners and strategists on the opposing side? How would that affect how war is waged?
In an interview with Defense One, Andrew Hallman, the Deputy Director of the CIA, expressed his concerns about deep fakes as they relate to security, saying, "We are in a battle for truth in the digital domain… Because that's frankly the digital conflict we're in, in that battle space… This is one of my highest priorities." [5]
Congress has been working on ways to stay ahead of potential deep fake threats, but so far it has been unable to pass workable legislation. A 2018 bill, S. 3805, introduced by Senator Ben Sasse of Nebraska, stipulated that it would be unlawful to "create, with the intent to distribute, a deep fake with the intent that the distribution of the deep fake would facilitate criminal or tortious conduct under Federal, State, local, or Tribal law…" [6]
The bill failed to become law, but it is an example of Congress's interest in regulating deep fakes. Its opponents were mostly Hollywood studios and creatives, who argued that there are already laws on the books prohibiting the unauthorized impersonation or use of someone's image or voice, and that the bill would infringe on the First Amendment rights of creators and their audiences. This is particularly important to professionals in the field of communications, such as journalists.
My specialization in this degree program is journalism, and I hope to report on some of the most impactful stories of our time. Deep fakes are potentially a major challenge to every industry, particularly communications and journalism. Journalism at its root is the search for and reporting of the truth; deep fakes can make finding and authenticating that truth very difficult. And in the age of fake news and incredulity on the part of news consumers, deep fakes take us a million miles down the rabbit hole.
The threat of deep fakes has led many news organizations to find new ways to authenticate the information they receive; increasingly, their reputations are on the line, and they can't afford to put out a string of unauthenticated and invalid stories. This is why Reuters [7], the news behemoth, developed its own in-house program to train its reporters to spot deep fakes. Reuters even created an in-house deep fake video as a training tool, to familiarize reporters with the latest technologies and to test its internal verification processes.
Deep fakes are a concern for every industry, in fact for everyone. Our modern society is an information society, heavily reliant on quickly receiving vast amounts of information on demand, on the go, at our fingertips. But what if that information is fake? Worse, what if we lose confidence in the information we receive and consider most information fake, even when it's real? What if deep fakes cause us to lose trust in information and in each other? What if deep fakes successfully alter not just our perception of truth, but also of reality? How will that affect daily life, how we live and work and relate to each other? What if the technology gets so good that large data sets are no longer needed to produce a realistic deep fake? Then it will become easier and cheaper to deceive the public; any amateur could pull it off. What if a deep fake about you is so good that even you are deceived, and can't decipher whether the image or video is doctored?
For a professional in the field of communications, deep fakes are a genuine concern and a ubiquitous potential threat that can negatively affect the industry. Yes, there might be some constructive uses for deep fakes; but right now, in the field of communications, particularly digital communications, there seem to be more threats than treats.
BIBLIOGRAPHY
1. Jordan Peele deep fake of former President Barack Obama [video]: https://www.youtube.com/watch?v=cQ54GDm1eL0
2. Steve Buscemi / Jennifer Lawrence deep fake [video]: https://www.youtube.com/watch?v=iHv6Q9ychnA
3. Nicolas Cage deep fakes [video]: https://www.youtube.com/watch?v=BU9YAHigNx8
4. A Beginner's Guide to Generative Adversarial Networks (GANs). Skymind. https://skymind.ai/wiki/generative-adversarial-network-gan
5. Defense One. https://www.defenseone.com/technology/2019/03/next-phase-ai-deep-faking-whole-world-and-china-ahead/155944/
6. S. 3805 – Malicious Deep Fake Prohibition Act of 2018, 115th Congress (2017–2018). https://www.congress.gov/bill/115th-congress/senate-bill/3805
7. Baker, Hazel. (March 2019). Making a 'deepfake': How creating our own synthetic video helped us learn to spot one. Reuters. https://www.reuters.com/article/rpb-deepfake/making-a-deepfake-how-creating-our-own-synthetic-video-helped-us-learn-to-spot-one-idUSKBN1QS2FO
- Waddell, Kaveh. (January 2019). Lawmakers plunge into "deepfake" war. Axios. https://www.axios.com/deepfake-laws-fb5de200-1bfe-4aaf-9c93-19c0ba16d744.html
- Solsman, Joan. (April 2019). Deepfakes may try to ruin the world. But they can come for you too. CNET. https://www.cnet.com/news/deepfakes-may-try-to-ruin-the-world-but-they-can-come-for-you-too/
