Deep Fake Technology and Privacy: A Comparison of Legal Remedies
- May 26, 2023
by Kanerva Jalas
With the development of Artificial Intelligence (AI) systems such as ChatGPT, concerns have arisen over what can be considered genuinely human-produced, and how AI-generated content can be told apart from it. Many AI systems, particularly those using Natural Language Processing (NLP), can learn from human-produced text and speech and generate novel content on the basis of that input. However, the launch of ChatGPT and other NLP systems is not the first instance in which the line between what is perceived as ‘real’ or human-produced and what is machine-generated has been blurred. In 2017, a new form of technology introduced the possibility of manipulating audio and video to create ‘clones’ of individuals. The first instances of its use were posted by a user on the social media site Reddit, who swapped the faces of performers in sexually explicit videos with the faces of celebrities. Since then, the technology has been used to create thousands of fake videos and pictures, some depicting narratives contrary to the views of the victims of the fake content. One such instance took place in 2018, when a picture of Emma González, a survivor of the Marjory Stoneman Douglas High School shooting, was edited to depict her tearing apart the US Constitution. Deepfakes have since been made of heads of government, some humorous, others containing offensive or harmful rhetoric.
Deepfakes are created through machine learning: an algorithm is provided with relevant input (in most cases videos, pictures and/or audio clips of the person to be impersonated), from which it learns to produce a realistic output in which the deepfaked person appears to say or do something that never actually happened. Where the consent of the individual depicted has not been obtained, that person’s privacy may be violated. The AI systems involved are often neural networks, which loosely simulate the functioning of a human brain. Such a network consists of nodes whose connections carry numerical weights; training on the input data adjusts these weights so that the network produces accurate results. A more advanced model for creating deepfakes has since been developed: generative adversarial networks (GANs), which rely on two competing networks. First, a generator is given sample data, from which it learns and attempts to produce new data samples. Second, a network called the discriminator analyses the samples produced by the generator and feeds back how convincingly they resemble the real data. This process is repeated until the generator’s output is of satisfactory quality.
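The adversarial feedback cycle described above can be sketched in a few lines. The following toy example is a hypothetical illustration, not a deepfake model: it uses NumPy and hand-derived gradients to train a two-parameter generator against a logistic discriminator on one-dimensional data. Real deepfake GANs use deep (typically convolutional) networks, but the generator/discriminator loop is the same in shape.

```python
import numpy as np

# Toy GAN sketch: a linear generator learns to mimic "real" data drawn
# from N(4, 1), guided only by a logistic discriminator's feedback.
rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def real_batch(n):
    # "Real" data the generator must learn to imitate.
    return rng.normal(4.0, 1.0, n)

# Generator g(z) = a*z + b, initially producing samples ~ N(0, 1).
a, b = 1.0, 0.0
# Discriminator D(x) = sigmoid(w*x + c), estimating P(x is real).
w, c = 0.0, 0.0

lr, batch = 0.05, 64
for step in range(2000):
    z = rng.normal(0.0, 1.0, batch)   # latent noise
    fake = a * z + b                  # generator output
    real = real_batch(batch)

    # Discriminator step: ascend on log D(real) + log(1 - D(fake)).
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * np.mean((1 - d_real) * real - d_fake * fake)
    c += lr * np.mean((1 - d_real) - d_fake)

    # Generator step: ascend on log D(fake) (non-saturating loss),
    # i.e. nudge parameters so the discriminator is fooled more often.
    d_fake = sigmoid(w * fake + c)
    grad_out = (1 - d_fake) * w       # d log D / d (fake sample)
    a += lr * np.mean(grad_out * z)
    b += lr * np.mean(grad_out)

print(f"generator mean parameter b = {b:.2f} (real data mean is 4.0)")
```

After training, the generator's offset `b` has drifted toward the mean of the real data purely through the discriminator's success-rate feedback, which is exactly the repeated generator/discriminator exchange described in the paragraph above.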
The development of deepfake technology has raised questions about its lawfulness and its relationship with individuals’ right to privacy. The right to privacy can be defined as the right to be ‘let alone’. It is considered a fundamental human right in many jurisdictions, being codified in Article 8 of the European Convention on Human Rights and Article 7 of the Charter of Fundamental Rights of the European Union, and read into the US Constitution in part through the Ninth Amendment. In many common law jurisdictions, it may be possible to sue the creator as well as the publisher of a deepfake on the basis of a ‘false light’ tort. This action requires proof that the published deepfake portrays the victim in a false and negative light in the eyes of an ‘average person’. Relatedly, a victim can bring a claim for defamation where the content presents false information about a person that could harm their reputation. Remedies for defamation can include an injunction against further sharing of the content as well as monetary damages. Another potential route to legal remedies is reliance on an individual’s intellectual property rights. To produce a deepfake, the neural network must be trained with sample data, including pictures or videos of the victim. Under most copyright regimes, such videos or pictures qualify as creative works, and the copyright therefore belongs to the author of the data sample. The economic rights under copyright reserve the reproduction of the work to the author alone. However, whether copyright protection is relevant to the creation of deepfakes has been the subject of extensive discussion without a conclusive answer.
According to the World Intellectual Property Organisation, the regulation of deepfakes should rely on data protection rather than copyright; copyright protection, on this view, would instead extend to the creator of the deepfake. On that basis, in the EU, individuals can invoke Article 5(1)(d) of the General Data Protection Regulation (GDPR), which enshrines the accuracy of personal data as a processing principle. The Article further states that ‘every reasonable step must be taken to ensure that personal data that are inaccurate are erased or rectified without delay’. Additionally, even where the data are not considered inaccurate, EU citizens can rely on Article 17 GDPR, which enshrines the right to be forgotten: data subjects may request the erasure of their personal data (i) where the data are no longer necessary or relevant, or (ii) where the subject withdraws their consent and no other legal ground for processing exists. Finally, where the deepfake is of a sexual nature, a claim for sexual harassment or for a violation of the individual’s sexual integrity may be possible, depending on the criminal code of the jurisdiction in question.
The development of deepfake technology can be frightening given the possibilities it creates for misleading visual or audio content. However, the rights to privacy and data protection remain fundamental in our society, and will continue to counterbalance the constant evolution of new technologies. To ensure the safe and comfortable use of the internet and social media, it is instrumental that we are aware of our position and legal rights in our digital society.
Sources:
Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation).
S Ortiz, ‘What is ChatGPT and why does it matter? Here’s what you need to know’, 7 April 2023, ZDNET: https://www.zdnet.com/article/what-is-chatgpt-and-why-does-it-matter-heres-everything-you-need-to-know/, accessed on 8 April 2023.
I Sample, ‘AI-generated fake videos are becoming more common (and convincing). Here’s why we should be worried’, 13 January 2020, The Guardian: https://www.theguardian.com/technology/2020/jan/13/what-are-deepfakes-and-how-can-you-spot-them, accessed on 8 April 2023.
B Chesney and D Citron, ‘Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security’ (2019) 107 California Law Review 1753.
S Nair, ‘Privacy in the deepfake world: impact and regulation’, 21 January 2022, Law and Tech Times: https://lawandtechtimes.wordpress.com/2022/01/21/privacy-in-the-deepfake-world-impact-and-regulation/, accessed on 8 April 2023.
N Schmidt, ‘Privacy law and resolving ‘deepfakes’ online’, 30 January 2019, IAPP: https://iapp.org/news/a/privacy-law-and-resolving-deepfakes-online/, accessed on 9 April 2023.
P Tseng, ‘What Can The Law Do About ‘Deepfake’?’, March 2018, McMillan: https://mcmillan.ca/insights/what-can-the-law-do-about-deepfake/, accessed on 8 April 2023.
WIPO, ‘WIPO Conversation on Intellectual Property (IP) and Artificial Intelligence (AI)’ WIPO/IP/AI/2GE/20/1.