White Paper: Deep Fakery
This white paper is an outcome of IPAM’s fall 2019 exploratory workshop, Deep Fakery: Mathematical, Cryptographic, Social, and Legal Perspectives.
Represented at the workshop were members of the mathematics, machine learning, cryptography, philosophy, social science, legal, and policy communities. Discussion at the workshop focused on the impact of deep fakery and how to respond to it. The opinions expressed in this white paper represent those of the individuals involved, and not of their organizations or of the Institute for Pure and Applied Mathematics.
“Deep fake” technology represents a substantial advance over earlier techniques of image, audio, and video manipulation, such as photo editing (“photoshopping”). It emerged from the recent deep learning revolution, in particular the development of generative adversarial networks (GANs). It enables the efficient, computer-assisted production of highly believable audio and video in which real people appear to be saying things they never said and doing things they never did.
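To make the adversarial training idea concrete for readers outside machine learning, the toy sketch below pits a generator against a discriminator on one-dimensional data. It is a minimal illustration, assuming PyTorch; the network sizes and the synthetic “real” distribution are hypothetical stand-ins for the far larger audio and video models used in practice.

    # Minimal GAN sketch: a generator learns to produce samples that a
    # discriminator cannot distinguish from "real" data. Toy example only.
    import torch
    import torch.nn as nn

    # Generator: maps random noise to a synthetic sample.
    G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
    # Discriminator: outputs the probability that a sample is real.
    D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
    bce = nn.BCELoss()

    for step in range(2000):
        real = torch.randn(64, 1) + 4.0          # "authentic" data: N(4, 1)
        fake = G(torch.randn(64, 8))             # synthetic data from noise

        # Discriminator step: learn to tell real from fake.
        opt_d.zero_grad()
        loss_d = (bce(D(real), torch.ones(64, 1)) +
                  bce(D(fake.detach()), torch.zeros(64, 1)))
        loss_d.backward()
        opt_d.step()

        # Generator step: learn to fool the (updated) discriminator.
        opt_g.zero_grad()
        loss_g = bce(D(fake), torch.ones(64, 1))
        loss_g.backward()
        opt_g.step()

The two networks improve in tandem: as the discriminator gets better at spotting fakes, the generator is pushed to produce more convincing ones. The same dynamic, at vastly larger scale and on richer data, underlies the believability of modern deep fakes.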
The application of deep learning to audio and video synthesis is not exclusively harmful; beneficial use cases arise in artistic, scientific, or even therapeutic contexts. We therefore propose the term deep fakery for a specific socio-technical configuration in which deep fake technology is used for malicious or anti-social purposes.
More broadly, the term deep fakery includes any use of this technology to produce deceptive, apparently authentic representations, whether text, images, audio, video, or online profiles. The term therefore encompasses not only fabricated videos but also fabricated online actions.
Fundamentally, deep fakery is a technology of human augmentation that enhances our capacity to produce alternate realities and pass them off as real.
When referring to a specific instance of deep fakery, we will use the popular term “deep fake.” We emphasize, however, that we focus only on anti-social or malicious applications. The reader should parse “deep fake” as “malicious deep fake.”
Combating deep fakery will be an arms race. On one side stand those who want to unmask malicious deep fakes and prevent their spread; on the other, those who want to make malicious deep fakes harder to detect and contain.
What is a reasonable goal in this battle? What would victory look like? And who should be involved?
By analogy with other persistent societal problems, such as crime, we propose two goals:
(1) Reduce the incidence of deep fakery and the harms it causes to a manageable level; and
(2) Provide redress for individuals, groups, and organizations harmed by the deep fakery that persists, and contain its adverse societal impacts.
We emphasize at the outset that deep fakery is a socio-technical problem that cuts across many disciplines. As our workshop confirmed, it is not merely a challenge for the machine learning community. It demands a transdisciplinary research agenda with input from cryptographers, social scientists, and legal and policy experts.
We have organized this white paper into four sections: (1) a taxonomy of the harms caused by deep fakery; (2) an overview of incentives that affect deep fakery; (3) a catalogue of promising research directions in the battle against deep fakery; and (4) a brief action plan for researchers and policy-makers.