
A New Hermeneutics of Suspicion? The Challenge of Deepfakes to Theological Epistemology

In this presentation, I provide an introduction to deepfakes and related machine-learning technologies for theologians, assess their danger as well as potential uses, and advocate for developing a spirituality of critical empathy in response.
Published November 8, 2019


“Jesus said to him, ‘Have you believed because you have seen me? Blessed are those who have not seen and yet have come to believe.’” (John 20:29)

What does it mean to see and yet not to believe? Is this inverse of the Johannine pericope of ‘doubting Thomas’ a virtue or vice in the age of synthetic videos better known as deepfakes? How will the growing use of deepfake videos affect theological epistemology, that is, our ability to discern the truth about God, ourselves, and our neighbors?

In this presentation, I provide an introduction to deepfakes and related machine-learning technologies for theologians, considering their potential use and misuse in theology. As we explore the topic, we will find that the phenomenon of deepfakes brings us deep into the theology of mediation, pushing us to ponder the relation between εἰκών and εἶδος, icon and idea. As Christians, we learn that appearances can deceive, mislead, or at least obscure underlying reality. The paradox of the form that is other than its substance is at the heart of Christian faith, from the mystery of the Lord’s Supper to the crisis of the Cross.

Paul Ricœur introduced the term “hermeneutics of suspicion” as a counterpart to the “hermeneutics of faith.” Whereas a “hermeneutics of faith” seeks to discern and bring the meaning of a text to light, a “hermeneutics of suspicion” questions its meaning, looking beneath the surface for repressed or suppressed significance. “This hermeneutics is not an explication of the object, but a tearing off of masks, an interpretation that reduces disguises.”1 While he apparently changed his mind over the course of his long career about the relation between these hermeneutical modes,2 when he discussed them in Freud and Philosophy he argued that they are necessary and complementary. As with Hegelian dialectic, suspicion turns into its opposite, namely faith, in seeking meaning behind the mask. Ricœur famously described this so-called “second faith” as “postcritical” or a “second naïveté.”3

Will the proliferation of deepfakes push us as a society to become more critical about the mediascape around us? Will this critical perspective lead us into a postcritical stance that opens onto vistas of meaning? Or will we simply become more suspicious, refusing to believe the evidence of our eyes even when all signs indicate we are facing the truth?

Three Scenarios

When people learn about the technology behind deepfakes, they tend to become fearful, and for good reason: the origin story of deepfakes is murky and unseemly, starting with an anonymous member of Reddit who called himself ‘deepfakes’ and applied off-the-shelf machine learning techniques to swap the faces of celebrities into pornographic videos. As Samantha Cole, senior staff writer at Motherboard and Vice, explained in twin articles from late 20174 and early 2018,5 a community has developed around the production of these videos.

Creating deepfake videos to blackmail people is also on the horizon. As Samantha Cole writes, “It isn’t difficult to imagine an amateur programmer running their own algorithm to create a sex tape of someone they want to harass.”6 The majority of states now have laws against the circulation of ‘revenge porn,’ that is, of sexually explicit images or audiovisual records.7 While these laws criminalize the nonconsensual sharing of sexually explicit photographs or videos, their application to synthetic images and videos is another question. Internet trolls have used Photoshop to create and spread degrading images of women for more than a decade, a practice that emerged as a public issue during the harassment of technologist Kathy Sierra in 2007.8 The pain and suffering caused by the release of synthetic pornography is no less real than that caused by organic pornography. And the more realistic it becomes, the worse the impact on its victims. For this reason, producers of deepfake pornography may find it lucrative to threaten people with its release. In fact, this kind of blackmail has already started to take place.9

If deepfake pornography threatens to cause victims personal anguish and social harm, the dissemination of fake videos in charged political situations might prove fatal. In October 2019, for instance, four protesters in Bangladesh died in riots sparked by a post on Facebook criticizing the Prophet Mohammed.10 The Hindu citizen who supposedly published the post complained that his account had been hacked; police corroborated his complaint and arrested the hackers. The quick action did not stop the riots, however. In regions where there is little trust between communities, people who see incendiary videos may act without waiting for confirmation of their veracity (or falsity). While digital forensics might eventually prove that a video had been doctored, such evidence would come too late to prevent violent disturbances on the ground. The production of deepfake videos about political figures has become a popular subgenre within the deepfake community; the subreddit r/SFWdeepfakes features synthetic videos of Donald Trump, Barack Obama, and Hillary Clinton, among others. The majority of these videos function as parody, inserting Trump into the film The Wolf of Wall Street or having Obama sing and dance to the tune of “Spooky Scary Skeletons.” These applications of deepfake technology are innocuous, clever, and funny, but more sinister applications could have real political impact. As with foreign interference in the 2016 presidential election in the United States, no straightforward remedy exists for undoing the immediate social and political aftermath of faked images and videos.

Coverage of deepfake videos tends to dwell on their negative potential. Given their origins, potential for harassment, and potential for spreading disinformation, the media’s alarm over deepfakes seems justified. As with any new technology, however, the advent of deepfakes comes with positive and negative potential. As the authors of Blown to Bits, a textbook used in high school and college-level courses in computing across the country, opine, “the key to managing the ethical and moral consequences of nourishing economic growth is to regulate the use of technology without banning or restricting its creation.”11 In the introductory computer science course I teach at Vanderbilt University, The Beauty and Joy of Computing, I cover the moral panics that periodically sweep through the media, school boards, and Congress, ranging from the worries about children’s exposure to pornography that led to the passage of the Communications Decency Act of 1996 to the battles over copyright and fair use that prompted the Stop Online Piracy Act (SOPA) and PROTECT IP Act (PIPA) in 2012. As we face the prospect of legislation after the passage of the Deepfake Report Act of 2019,12 will it be possible for us to overcome anxieties about the genuine threats deepfakes pose in order to consider and safeguard their positive applications?

As theologians, we have particular reasons to take care. We must think beyond the economic, legal, and even ethical dimensions of deepfakes to consider their spiritual implications. We also have to avoid falling into our socially-assigned role of conservators of the status quo even as some hyperbolically proclaim that “AI may be the greatest threat to Christian theology since Charles Darwin’s On the Origin of Species.”13 The best way to assess the spiritual impact of any new technology is to spend time exploring its potential for good and bad, examining its components, and exercising our theological imagination.

The majority of publications about deepfakes address their potential for spreading disinformation. But the technology also has positive aspects. Deepfakes can serve legitimate ends by bridging cultural divides and forging emotional connections. However, boundaries between such valid uses and virtual creepiness may be difficult to discern. In what follows, I present three brief scenarios, grounded in contemporary technology, for us to consider.

Editing Sermons

Sharing audio or video recordings of sermons online is common today. If you are like me, you prefer not to listen to the sound of your own voice. At Vanderbilt, I am one of the team members who collectively produce Leading Lines, a podcast about educational technologies. I am grateful that our team includes Rhett McDaniel, an educational technologist who also happens to have an M.S. in Music Technology. Rhett skillfully edits every episode to smooth over verbal stumbles and tics. If you are a pastor, having your worship services broadcast increasingly comes with the territory. But, in my experience, mainline churches do not edit the recordings they put online, making them difficult at times to listen to. If you stumble while reading a biblical passage, make an impromptu joke that falls flat, or neglect to mention one of the volunteer leaders of vacation Bible school, your gaffe will be memorialized in the congregation’s digital library.

A company called Descript markets audio editing software that makes it straightforward to edit out mistakes, pauses, and other problems in podcasts and other kinds of recordings. Descript generates a transcript from the audio and, by keeping text and speech in sync, allows you to edit the audio by changing the transcript. So, if you want to get rid of that bad joke, you strike it from the transcript and it vanishes from the audio too. Of course, while Descript provides an attractive interface, it does not differ qualitatively from other audio editing and transcription tools, which also provide sophisticated means of correcting errors.
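The basic mechanics of transcript-driven editing can be pictured with a simple data structure: each transcribed word carries the span of time it occupies in the recording, so deleting a word from the transcript tells the editor exactly which slice of audio to cut. The sketch below, in Python, is my own simplified illustration of that idea, not Descript’s implementation; the word-level timestamps are assumed to come from a speech-to-text service.

```python
from dataclasses import dataclass

@dataclass
class Word:
    text: str
    start: float  # seconds into the recording
    end: float

# A hypothetical aligned transcript, as a speech-to-text service might return it.
transcript = [
    Word("Thanks", 0.0, 0.4), Word("to", 0.4, 0.5), Word("our", 0.5, 0.7),
    Word("volunteers", 0.7, 1.4), Word("um", 1.4, 1.9), Word("this", 1.9, 2.2),
    Word("week", 2.2, 2.6),
]

def delete_words(words, unwanted):
    """Drop words from the transcript and report the audio spans to splice out."""
    kept, cuts = [], []
    for w in words:
        if w.text.lower() in unwanted:
            cuts.append((w.start, w.end))  # the editor removes these spans of audio
        else:
            kept.append(w)
    return kept, cuts

kept, cuts = delete_words(transcript, {"um"})
print("edited transcript:", " ".join(w.text for w in kept))
print("audio spans to remove (seconds):", cuts)
```

Because text and audio stay paired in this way, an edit to one is automatically an edit to the other.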

What makes Descript distinctive is the integration of a technology called Lyrebird to enable audio overdubbing. The researchers collaborating on Lyrebird highlight similar scenarios for its use. Drawing on an area of study called “text-informed speech inpainting,” Lyrebird uses deep learning techniques to allow editors to insert new text into the transcription and to produce new audio in the recording that blends seamlessly with the words that come before and after.14 In other words, if you forgot the name of that volunteer, you do not have to live with the mistake: by editing the transcript, you insert a mention of that person into the audio and, to everyone who listens to the recording, it sounds as natural as it would have had you said it on Sunday morning.

Preaching in Tongues

What about using deepfakes to bridge linguistic divisions in congregations? In churches serving immigrant communities, pastors commonly hold services of worship in different languages. There may, for instance, be an early morning service in Spanish and a late morning service in English. A Methodist congregation in my neighborhood in Nashville holds simultaneous services of worship in English in the main sanctuary and in Karen, English, and Thai in the community center next door. While accommodating the linguistic differences of parishioners is admirable, maintaining separate services inevitably leads to divisions, however benign, within the congregation. The alternative, combining worship services with the assistance of simultaneous translators, is equally problematic because of its cost and its potential for increasing the length of the service. What if we could draw on deep learning to create versions of the same sermon in English and any other language spoken in the congregation?

Synthesia is a company based in London that specializes in what it terms “video synthesis technology.”15 Synthesia uses “Generative AI” to “reduce the cost, skill and language barriers to creating, distributing and consuming content.” On its website, Synthesia also highlights its ethical commitments, promising to “never re-enact someone without explicit consent” and to work with partners of all kinds “to develop best practices” on the use of “video synthesis technology.”16

The Synthesia website features exemplary stories about the potential of “video synthesis.” Consider the story of a cross-cultural marriage proposal using Synthesia’s technology: “I Used AI To Propose To My Wife In Her Native Language.”17 In the video, a white man from the United States agrees to ask his Chinese spouse to marry him again, this time proposing in Chinese. How can he pull off this feat without speaking Mandarin? Technologists from Synthesia film him delivering the proposal in English, creating a computer model of his facial expressions as he speaks. A Chinese-speaking voice actor then reads the translation of his proposal in Mandarin. The technology then maps the vocal sounds and facial expressions onto the man’s face, allowing him to “speak” to his spouse in her native language.

Museum Informatics

The emerging field of museum informatics seeks to inform and engage visitors about works of art through new media and digital technologies. Developments in augmented reality will make the current audio tours with the bulky headsets and players seem woefully dated. Imagine coming across Lucas Cranach the Elder’s portrait of Martin Luther. By wearing AR headsets, you might see Luther turn to face you and begin to describe his ongoing efforts to reform the church, his intention to translate the Bible into German, and his sorrow at the loss of his daughter, Elizabeth. Through augmented reality, the portrait becomes a window into another time, another place, educating viewers about the people, places, and events they find depicted in oil.

The ability to produce this kind of animation is not novel. Using game development platforms like Unity or Unreal Engine, skilled animation artists create and animate sprites from static images. But deep learning promises to automate the process and make it scalable. In “Few-Shot Adversarial Learning of Realistic Neural Talking Head Models,” a team of scientists from the Samsung AI Center in Moscow and the Skolkovo Institute of Science and Technology has created algorithms for generating animated representations from photographs. What is innovative about their technique is the ability to produce these animations from a single image: “Our system can generate a reasonable result based on a single photograph (one-shot learning), while adding a few more photographs increases the fidelity of personalization.”18 The team used their deep learning algorithms to generate animated models from images of the Mona Lisa, Fyodor Dostoevsky, Albert Einstein, and Marilyn Monroe. The title of the article in Artnet News covering the achievement encapsulates the response from curators and art historians: “Russian Researchers Used AI to Bring the Mona Lisa to Life and It Freaked Everyone Out.”19

Technology of Deepfakes

How do deepfakes work in practice? The technology of deepfakes belongs to a subfield of machine learning called “deep learning.” As Gary Marcus succinctly defines it, “Deep learning…is essentially a statistical technique for classifying patterns, based on sample data, using neural networks with multiple layers.”20 In less abstruse terms, the goal is to take a set of inputs and map its contents to a labeled set of outputs.21 That is, we have a bunch of unlabeled data that we want to label, and our job is to draw lines between the data and the labels. As Marcus indicates, a typical application of deep learning is to take a digitized set of manuscripts and to map the handwritten letters to one of a few dozen canonical alphabetic representations. The distinctive feature of deep learning is that the lines are not drawn directly from the input set to the labeled data. Rather, the lines from the initial data pass through many interim layers until they converge on the labels. So-called forward and backward propagation algorithms allow input and output layers to communicate through the sometimes vast set of interim layers, making adjustments between the “neurons” (or provisional mappings) until the fit between inputs and outputs becomes satisfactory. As Andrew Trask puts the point in Grokking Deep Learning, “It’s like a giant game of telephone—at the end of the game, every layer knows which of its neurons need to be higher and lower….”22
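To make this layered picture concrete, here is a minimal sketch in Python with NumPy of a network with a single hidden (“interim”) layer. The toy data, layer sizes, and learning rate are illustrative assumptions of my own, not drawn from Marcus or Trask; the point is simply to show inputs propagating forward through an interim layer and the error propagating backward so that each “neuron” is nudged higher or lower.

```python
import numpy as np

# Toy task: classify 2-D points into one of two labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                               # the inputs
y = (X[:, 0] + X[:, 1] > 0).astype(float).reshape(-1, 1)    # the labels we want to learn

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Weights connect input -> hidden -> output: the "lines" are drawn indirectly,
# through an interim layer, rather than straight from inputs to labels.
W1 = rng.normal(scale=0.5, size=(2, 8))
W2 = rng.normal(scale=0.5, size=(8, 1))

for step in range(2000):
    # Forward propagation: inputs flow through the interim layer to the output.
    hidden = sigmoid(X @ W1)
    output = sigmoid(hidden @ W2)

    # Backward propagation: the output error is passed back through the layers,
    # telling each "neuron" whether its contribution should be higher or lower.
    delta_out = (output - y) * output * (1 - output)
    delta_hidden = (delta_out @ W2.T) * hidden * (1 - hidden)

    W2 -= 0.01 * (hidden.T @ delta_out)
    W1 -= 0.01 * (X.T @ delta_hidden)

accuracy = ((output > 0.5) == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```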

The development of a technique termed “Generative Adversarial Nets” has reduced the computational expense of producing deepfake videos.23 The leading idea is to pit two deep learning models against one another. The first model (the “generative” model) presents its output data to the second model (the “discriminative” model), which seeks to classify that data either as a product of the generative model or as a sample of the data to be modeled (i.e., the training data). As the authors of the 2014 publication that introduced the concept explain, “The generative model can be thought of as analogous to a team of counterfeiters, trying to produce fake currency and use it without detection, while the discriminative model is analogous to the police, trying to detect the counterfeit currency. Competition in this game drives both teams to improve their methods until the counterfeits are indistinguishable from the genuine articles.”24 The innovative aspect of this technique is that both generator and discriminator are learning as the game proceeds. The generator continues to create data distributions that approximate the training data more closely, and the discriminator learns to distinguish between the generator’s output and the training data more accurately. The competition between the models concludes when, as the analogy suggests, the generator produces samples that the discriminator can no longer reliably distinguish from the training data. In the current state of the art, GANs are not guaranteed to bring generator and discriminator into equilibrium; they sometimes oscillate between suboptimal solutions. Many pragmatic techniques have been put forward to prevent the models from collapsing before converging.25
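A compact sketch of this adversarial game, written in Python with PyTorch, may help make the loop visible. The one-dimensional “training data” and the tiny networks are illustrative assumptions of mine, not the architecture of Goodfellow et al.; what matters is the structure of the competition: the discriminator (the police) learns to label real samples and fakes, while the generator (the counterfeiter) learns to produce samples the discriminator accepts as real.

```python
import torch
import torch.nn as nn

# The "training data" to be imitated: samples from a 1-D Gaussian.
def real_data(n):
    return torch.randn(n, 1) * 0.5 + 2.0

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(3000):
    # 1. Train the discriminator (the police) to tell real samples (label 1)
    #    from the generator's counterfeits (label 0).
    real = real_data(64)
    fake = generator(torch.randn(64, 8)).detach()
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake), torch.zeros(64, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2. Train the generator (the counterfeiter) to fool the discriminator
    #    into labeling its output as real.
    fake = generator(torch.randn(64, 8))
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

print("real mean:", real_data(1000).mean().item())
print("generated mean:", generator(torch.randn(1000, 8)).mean().item())
```

As the paragraph above notes, nothing guarantees that such a loop reaches equilibrium; in practice, additional stabilizing tricks are often required.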

Democratization of Manipulation

The ability to produce image-to-image translations is not new.26 Major movie production studios already have technologies to produce realistic body doubles. As Patrick Shanley and Katie Kilkenny wrote in The Hollywood Reporter, “Hollywood has long swapped faces — just using different tech.”27 For example, studios have used these methods to create continuities in fictional universes like Star Wars, bringing back characters like Princess Leia and Grand Moff Tarkin after the deaths of Carrie Fisher and Peter Cushing.28 If special effects studios in Hollywood possessed the technology for creating synthetic videos, then intelligence agencies in the United States and abroad must have possessed it as well. After all, intelligence agencies around the world have produced propaganda, manipulated media, and planted ‘false flags’ for decades. Among the materials from the National Security Agency that Edward Snowden released in 2014 is a document listing the British Government Communications Headquarters’ (GCHQ) digital manipulation tools.29 While the ability to alter digital video may have been around for a while, the organizations and agencies that could pull off these transformations were few. What is new about deepfakes is the democratization of video manipulation.

Philosophy and Cognitive Science of Deepfakes

Over the Labor Day weekend, I attended Dragon Con, an annual gathering of more than 85,000 fans of science fiction, fantasy, gaming, and other forms of contemporary geek culture. Alongside sessions devoted to exploring Doctor Who, Harry Potter, Star Trek, and the latest anime, there is a Dragon Con Skeptic Track devoted to “critical thought, extraordinary claims, and promotion of good science.” This year, the track sponsored a session titled “How Deep Is Your Fake?” on the challenge of identifying and debunking deepfake videos. The presenter, Teddi Fish, who cosplayed as Teddy the Flying Spaghetti Monster while giving her talk, provided an overview of the state of the problem from a technical as well as a social perspective, concluding with a slide advising “Question before you share. Question that with which you agree. Stay skeptical.” The advice sounds laudable and, certainly, nobody wants to be taken in by fraud.

According to Karen Hao, our anxiety about being misled by deepfakes may be creating the very negative effects we are seeking to avoid. In “The Biggest Threat of Deepfakes Isn’t the Deepfakes Themselves,” she notes that overly skeptical viewers have already come to regard authentic videos as potential fakes, leading to serious political consequences.30 In other words, we are becoming so concerned about the potential of fraudulent video that political agents are using that anxiety against us, discrediting real videos as misinformation and ‘fake news.’ As Hao quotes Aviv Ovadya, an expert in misinformation: “What [disinformation actors] really want is not for you to question more, but for you to question everything.”31

Skepticism runs counter to core principles of human psychology and information economics. As Fish herself remarked during her presentation at DragonCon, “human beings are wired so that what we see sticks in our brain as something that is, in fact, reality.”32 If we doubt everything, our ability to act degrades. A major reason we have trademarks and service marks is to save us the trouble of evaluating sources.

As the American pragmatists taught us more than 150 years ago, absolute skepticism is a practical impossibility. It is not possible to suspend belief in all your convictions simultaneously. In “Some Consequences of Four Incapacities” (1868), Charles Sanders Peirce argued that Cartesian skepticism foundered due to this practical inability. Peirce noted that Cartesianism “teaches that philosophy must begin with universal doubt,” but countered that such a standpoint is self-deceptive.

We cannot begin with complete doubt. We must begin with all the prejudices which we actually have when we enter upon the study of philosophy. These prejudices are not to be dispelled by a maxim, for they are things which it does not occur to us can be questioned. Hence this initial skepticism will be a mere self-deception, and not real doubt; and no one who follows the Cartesian method will ever be satisfied until he has formally recovered all those beliefs which in form he has given up.33

A problem with advocating sweeping doubt about the veracity of every digital image or audiovisual recording we encounter is that, if we followed that advice, we would quickly lose our ability to act. We cannot be skeptical about everything we see. At best, we can train ourselves about when to become skeptical. To become skeptical about something we thought we knew, as Peirce indicated, we need to have genuine grounds for doubting its veracity; cultivating artificial doubt will not lead us to the truth about what we are seeing.

If casting doubt on everything we see until it’s proven true does not constitute a workable strategy, what can we do to prevent ourselves from falling for misinformation? From the standpoint of cognitive science, the task may actually be more difficult than it appears. In “Believing that Humans Swallow Spiders in Their Sleep: False Beliefs as Side Effects of the Processes that Support Accurate Knowledge,” Elizabeth J. Marsh, Allison D. Cantor, and Nadia M. Brashier of Duke University examine how errors become integrated into our “knowledge base” through what they term “adaptive processes.” These processes “normally support accurate knowledge, but sometimes backfire and allow the introduction of errors into the knowledge base.”34 In their article, they review five such adaptive processes. Of these, I’d like to highlight several that connect directly with the question of deepfakes.

First, the authors note that disbelieving something we learn takes more cognitive effort than believing it.35 We are hardwired to accept novel information as true; it takes additional mental effort to flag it as false. As they point out, this strategy makes sense given that human beings evolved in an environment where perceptions are generally grounded in the truth. Of course, we do have cognitive systems for rejecting perceptions as untrue. However, psychologists have demonstrated that short-circuiting these higher-level evaluative systems is relatively easy.36 As we distractedly scrolled through social media feeds during Hurricane Harvey in 2017, how many of us paused to reflect on the likelihood of a heavily shared image of a shark swimming along a flooded highway? Presumably far fewer of those who saw the image on Twitter later read Linda Qiu’s admonition in the New York Times: “Don’t believe it. This fake image is an old hoax that circulates routinely after major hurricanes.”37

Another “adaptive process” that inhibits our ability to screen out errant beliefs is what they term the “fluency-based heuristic for judging truth.”38 The effect boils down to a confusion between our ability to process information and its truth value. If we can recall something readily to mind, we are more likely also to judge it as true. As the authors indicate, this effect can be exploited by advertisers who pay to expose people in certain markets repeatedly to certain claims, making it easier for them to remember those claims and, hence, to assume their truth. Interestingly, pairing an image with factual assertions amplifies people’s tendency to accept those assertions, even if the image is factually unrelated.39

A final “adaptive process” worth noting is that we often accept “partial matches” when making connections between facts.40 The authors note that speech communication is fraught with parapraxis and other forms of verbal disfluency. When someone is struggling to communicate an idea, we generally try to make sense of what that person is saying, filling in the gaps while reassuring him or her that we “know what you mean.” But, as it happens, employing this strategy also means that we tend to gloss over factual errors. The authors point to an effect that Thomas D. Erickson and Mark E. Mattson described as the “Moses Illusion” to illustrate this tendency. As Erickson and Mattson demonstrated, when asked “‘How many animals of each kind did Moses take on the Ark?’ most people answer ‘two,’” despite knowing that Noah, not Moses, built the Ark.41 The etiology of this effect is not certain, but Marsh, Cantor, and Brashier follow Erickson and Mattson in assuming that “monitoring [for errors] takes effort, and accepting ‘good-enough’ representations is a shortcut that normally works.”42

The upshot of this research is that our cognitive processes balance efficiency against accuracy when assimilating new information. To my knowledge, researchers have not yet studied how these adaptive processes will affect our ability to evaluate the veracity of deepfake videos, but we might readily imagine that their producers will draw on this research to make their fabrications slip past our cognitive defenses. While adopting a skeptical attitude toward what we see may help us to screen out errors, doing so will also slow down our assimilation of new information.

A Phenomenology of Mediation

Paul Ricœur described the hermeneutics of suspicion as a tearing off of the masks. I prefer to describe this mode of interpretation more gently as perceiving through the veil, or perhaps glimpsing through the veil. “How beautiful you are, my love, how very beautiful! Your eyes are doves behind your veil” (Song of Solomon 4:1).

Philosophically, the notion of the physical image as veil takes central place in the phenomenological philosophy of Edmund Husserl. In the Fifth Meditation of his Cartesian Meditations (1931), Husserl explores the phenomenology of intersubjectivity.43 Husserl tackles the question of our perception of the other. How do we experience another consciousness in the world of objects? The experience of an other differs from the experience of an object, but we never encounter the ego of the other directly. If we did, Husserl wrote, the other would become ourselves. To maintain the distinction between ourselves and the other, we encounter the other through some mediating form, whether a physical body, a voice, or a moving image. Husserl describes the intuition that an ego exists behind the form as a “mediate intentionality.” As he explains in §50,

A certain mediacy of intentionality must be present here, going out from the substratum, “primordial world” … and making present to consciousness a “there too,” which nevertheless is not there itself and can never become an “itself-there.”44

Husserl develops the concept of apperception to articulate this form of mediated intentionality. In perceiving the other, we perceive first the body of the other and then, by way of analogy, the “I” of the other. The apperception of the other does not function as a temporal, two-step process whereby we first see a body and then analogize to the presence of an ego. The body and the ego become paired in apperception, but they nevertheless remain conceptually distinct, never fused or collapsed. The veil of mediating form cannot be stripped away, but we can perceive through its texture the other ego who stands before us.

Husserl described the apperception of the other, that is, the perceiving of a spiritual alter “ego” through the veil of physical presence, as a “transcendental theory of experiencing someone else” or “a transcendental theory of so-called ‘empathy [Einfühlung].’”45 The role empathy plays in constituting our perception of the other has long been the subject of philosophical discussion and disagreement.46 For our purposes, what is crucial is the distinction between intentional experience of the physical presence of the other and empathetic perception of the spiritual “I” of the other. For this distinction allows us to imagine exercising empathy to perceive a spiritual other with a completely different surface form than our own. Or, conversely, confronting a form that, though familiar in its external features, proves impenetrable in fact. A form that does not lead to a spiritual reality, no matter how empathetically we project ourselves, is empty and vacuous. Strangely, Husserl’s meditation on intersubjectivity from 1931 brings us close to Alan Turing’s reflections on artificial intelligence from 1950. In the ‘imitation game’ that Turing described in “Computing Machinery and Intelligence,” the goal was similarly to discern, by exercising empathy, whether the messages you were receiving through a physical barrier came from a spiritual “I” or a calculating machine.47 As we find the surfaces of perception becoming increasingly diverse and deceptive, we may find that empathy, as conceived by the philosophers of the phenomenological tradition, becomes our key to exposing or exploring the spiritual dimensions of deepfakes.

A Spirituality of Iconoclasm

The growing alarm over the impact of deepfake videos correlates with the media saturation of contemporary culture. A partial solution to the threat of deepfake videos would simply be to remove ourselves from the theatre of contemporary life, stepping away from Times Square into quieter backstreets. Jaron Lanier has delivered modern-day jeremiads against social media, arguing that it has deleterious effects not only on our ability to discern the truth but also on our ability to cultivate our souls.48 Certainly, reducing our exposure to social media, where deepfake videos increasingly proliferate, reduces our personal vectors of attack.

A spirituality of iconoclasm imposes distance from the cascading series of images that surround us online in order to cultivate empathy. The purpose of fostering this remove from visual culture is not to reject images wholesale as false representations, but to consider them with greater intentionality, thoughtfulness, and perspicacity. By fostering a reserve, whether ironic, intellectual, or spiritual, toward visual media, we gain facility in reading and interpreting their cultural logic. This dare-we-say philosophical reserve toward visual culture has roots in Platonism, as Edith Wyschogrod pointed out.

In the new age of images there are only images. Could it not be argued that the promiscuity of the image was already present in Plato’s philosophy? From the Platonic standpoint, art objects, shadows, and the reflections of things are the wanton and wild images that escape regimentation by the logos.49

In deepfakes, far from being absent, logos saturates the image. The synthesis of disparate objects, the swapping of body parts, the switching of voices, and the juxtaposition of discordant elements all reflect the vision of a creator, carried out through data, algorithms, and processing power. The overabundance of logos in such videos is perhaps the best ‘tell,’ as the design is so perfect that it becomes uncanny. But where does this saturating logos lead? To the void or to a genuine spiritual “I” communicating through its computational veil? Only empathetic intuition may tell.

What would an epistemology of iconoclastic empathy look like in practice? Perhaps a little science fiction by way of conclusion will assist our imaginations. In his story “Liking What You See: A Documentary” (2002), Ted Chiang imagines a condition called ‘calliagnosia’ that disrupts the recognition of beauty. Chiang builds the narrative from documentary reports of various agents, ranging from college students to neuroscientists, to explore the advantages and limitations of taking a drug to induce calliagnosia. A major question of the story is why physical beauty should shape our perception of the spiritual “I.” As a student in the story avers, “Calli doesn’t blind you to anything; beauty is what blinds you. Calli lets you see.”50 In like manner, an epistemology of iconoclastic empathy teaches us to see by training us to regard surface appearances as veils that obscure or, possibly, reveal.

Clifford B. Anderson

Works Cited

Abelson, Hal, Ken Ledeen, and Harry Lewis. Blown to Bits: Your Life, Liberty, and Happiness After the Digital Explosion. Upper Saddle River, NJ: Addison-Wesley Professional, 2008.

Ball, James. “GCHQ Has Tools to Manipulate Online Information, Leaked Documents Show.” The Guardian, July 2014.

Brébisson, Alexandre de. “How Imputations Work: The Research Behind Overdub.” https://www.descript.com/post/how-imputations-work-the-research-behind-overdub, September 2019.

Chiang, Ted. Stories of Your Life and Others. New York: Knopf, 2010.

Cole, Samantha. “AI-Assisted Fake Porn Is Here and We’re All Fucked.” Vice, December 2017.

———. “We Are Truly Fucked: Everyone Is Making AI-Generated Fake Porn Now.” Vice, January 2018.

Dafoe, Taylor. “Russian Researchers Used AI to Bring the Mona Lisa to Life and It Freaked Everyone Out. See the Video Here.” Artnet News. https://news.artnet.com/art-world/mona-lisa-deepfake-video-1561600, May 2019.

Electronic Privacy Information Center. “State Revenge Porn Policy.” https://epic.org/state-policy/revenge-porn/, 2019.

Erickson, Thomas D., and Mark E. Mattson. “From Words to Meaning: A Semantic Illusion.” Journal of Verbal Learning and Verbal Behavior 20, no. 5 (October 1981): 540–51. doi:10.1016/S0022-5371(81)90165-1.

Goodfellow, Ian. “NIPS 2016 Tutorial: Generative Adversarial Networks,” December 2016.

Goodfellow, Ian J., Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. “Generative Adversarial Networks,” June 2014.

Gupta, Swati, and Helen Regan. “Four Dead in Bangladesh Riot over Facebook Post.” CNN. https://www.cnn.com/2019/10/21/asia/riot-deaths-facebook-post-intl-hnk/index.html, October 2019.

Hao, Karen. “The Biggest Threat of Deepfakes Isn’t the Deepfakes Themselves.” MIT Technology Review. https://www.technologyreview.com/s/614526/the-biggest-threat-of-deepfakes-isnt-the-deepfakes-themselves/, 2019.

Husserl, Edmund. Cartesian Meditations: An Introduction to Phenomenology. Translated by Dorion Cairns. The Hague: Martinus Nijhoff, 1960.

Kanter, Steven. “I Used AI to Propose to My Wife in Her Native Language,” December 2018.

Kemp, Luke. “In the Age of Deepfakes, Could Virtual Actors Put Humans Out of Business?” The Guardian, July 2019.

Lanier, Jaron. Ten Arguments for Deleting Your Social Media Accounts Right Now. New York: Henry Holt and Co., 2018.

Liotta, Edoardo. “Student in Mumbai Arrested for Editing Girl’s Face onto Porn and Threatening to Share It.” Vice, October 2019.

Luo, Zhida. “‘Seeing-In’ and Twofold Empathic Intentionality: A Husserlian Account.” Continental Philosophy Review 51, no. 3 (September 2018): 301–21. doi:10.1007/s11007-017-9432-6.

MacKinnon, Rebecca. Consent of the Networked. New York: Basic Books, 2013.

Marcus, Gary. “Deep Learning: A Critical Appraisal.” arXiv:1801.00631 [Cs, Stat], January 2018. http://arxiv.org/abs/1801.00631.

Marsh, Elizabeth J., Allison D. Cantor, and Nadia M. Brashier. “Believing That Humans Swallow Spiders in Their Sleep: False Beliefs as Side Effects of the Processes That Support Accurate Knowledge.” In Psychology of Learning and Motivation, edited by Brian H. Ross, 64:93–132. Academic Press, 2016. doi:10.1016/bs.plm.2015.09.003.

Merritt, Jonathan. “Is AI a Threat to Christianity?” The Atlantic. https://www.theatlantic.com/technology/archive/2017/02/artificial-intelligence-christianity/515463/, February 2017.

Peirce, Charles Sanders. “Some Consequences of Four Incapacities.” Journal of Speculative Philosophy 2 (1868): 140–57.

Portman, Rob. “Text - S.2065 - 116th Congress (2019-2020): Deepfake Report Act of 2019.” Webpage. https://www.congress.gov/bill/116th-congress/senate-bill/2065/text, October 2019.

Qiu, Linda. “A Shark in the Street, and Other Hurricane Harvey Misinformation You Shouldn’t Believe.” The New York Times, August 2017.

Ricoeur, Paul. Freud and Philosophy: An Essay on Interpretation. Translated by Denis Savage. New Haven: Yale University Press, 1970.

Scott-Baumann, Alison. Ricoeur and the Hermeneutics of Suspicion. London: Continuum, 2012.

Shanley, Patrick, and Katie Kilkenny. “Deepfake Tech Eyed by Hollywood VFX Studios.” The Hollywood Reporter. https://www.hollywoodreporter.com/news/deepfake-tech-eyed-by-hollywood-vfx-studios-1087075, April 2018.

Shen, Tianxiang, Ruixian Liu, Ju Bai, and Zheng Li. “‘Deep Fakes’ Using Generative Adversarial Networks (GAN),” 2018, 9.

Trask, Andrew. Grokking Deep Learning. 1st ed. Shelter Island: Manning Publications, 2019.

Turing, A. M. “Computing Machinery and Intelligence.” Mind LIX, no. 236 (October 1950): 433–60. doi:10.1093/mind/LIX.236.433.

Wyschogrod, Edith. An Ethics of Remembering: History, Heterology, and the Nameless Others. Chicago: University of Chicago Press, 1998.

Zahavi, Dan. Self and Other: Exploring Subjectivity, Empathy, and Shame. 1st ed. Oxford: Oxford University Press, 2015.

Zakharov, Egor, Aliaksandra Shysheya, Egor Burkov, and Victor Lempitsky. “Few-Shot Adversarial Learning of Realistic Neural Talking Head Models.” arXiv:1905.08233 [cs], May 2019. http://arxiv.org/abs/1905.08233.

Comments
Kate Ott: This resonates with questions throughout the conference about encounter and events that meet otherness and create knowledge.
Kate Ott: This also relies on authority or value being attributed to the institutional evaluating source in the first place.
Michael Hemenway: I would be curious to hear more about what you mean by theological imagination here.
Michael Hemenway: How common is it for people in any situation to wait for confirmation of veracity to act?
Michael Hemenway: Cliff, is it possible that the evidence of our eyes betrays us at times? I read Levinas as suggesting that seeing is prone to betrayal more than hearing. Is there something other than our senses that arbitrates real/true?
Florian Höhne: That is a very interesting point; I would love to discuss it further. As far as I understood you (and please correct me if I am wrong), Husserl’s phenomenology of the I and the body, and references to a rather Platonic ontology, lay the grounds for this suggestion. Personally, I find parts of I-Thou philosophy convincing because they allow us to think of the encounter with the other as the encounter with a unity in which the split of res extensa and res cogitans need not be implied. This resonates with more recent ideas of embodiment. If the body of the other is not just the means/image by which one is pointed to the I beyond, in analogy to one’s own I, but rather the always already embodied I, then the surface appearance is not just appearance but in itself the reality of the other. Is what you call iconoclastic empathy still compatible with this more responsory kind of thinking? I have such sympathy for these non-Platonic approaches because Platonic approaches might tend to be in danger of making the idea beyond the appearance more important than the concrete Thou…
Florian Höhne: Yes, very plausible.
Benedikt Friedrich: Would this lead to some kind of media fasting? Or what else do you mean by “imposing distance”? Does this require a certain consciousness of this phenomenon and its problematic implications in our times? I can imagine this is very difficult to cultivate within a generation growing up within this flow of images…
Benedikt Friedrich: Do you think this is some kind of contrast reaction to the democratizing effect of deep learning? Because, from what I have understood, trademarks seem to become some kind of epistemic authority. The quite challenging question then might be: Is there any chance to not let those become completely indiscriminate?
Benedikt Friedrich: Thank you for pointing out the massive complexity of deep learning! I have the impression we still often can’t imagine how deep deep learning actually is, and that it is qualitatively more than a linear operation that processes data from an input interface to a rather predictable output. But what follows (the several layers and the net they are building) gives an impression of how self-veiling such algorithms are.
Benedikt Friedrich: I have the impression this is a big game changer: editing and producing high-quality “deep fakes” has been something only a few were able to do. You need a lot of equipment, processing power, expensive software products, etc. But the development of easy-to-use tools driven by deep learning, and therefore automated, might turn this technique into something implemented in, for example, next-generation smartphones!
Gotlind Ulshoefer: There are already deepfake tools available for smartphones, so I agree that this development poses a big challenge, including how to detect the “reality/truth” of an image or a film.