
Alliance Centre for Intellectual Property Rights
NAVIGATING THE DEEP: ANALYSING THE IMPACT OF DEEP FAKES ON PERSONALITY RIGHTS
March 1, 2024
*Ms. Pragathi U Bhat
INTRODUCTION
The increasing adoption of Artificial Intelligence (AI), and of generative AI techniques in particular, to produce creative works has raised questions about whether AI can be recognised as an author, and whether copyright protection should vest in the AI tool itself or in the person who caused the work to be created. In any discussion of works created by AI, it is pertinent to address the implications of deepfakes for the protection of personality rights. This blog analyses the impact of AI-generated deepfakes on personality rights in India and the limited protection those rights currently receive.
Deepfakes, a portmanteau of "deep learning" and "fake", refer to a form of synthetic digital content created using AI tools in which two or more forms of media are visually synthesised. In other words, deepfakes are visual media in which a person's face is overlaid on another body, or in which the audio of an original video is modified, to produce a realistic, humanlike output. Deepfakes can be used for both harmless or noble causes and for detrimental ones. One example of deepfakes being used for a good cause is the use of AI tools to produce deepfakes of the English footballer David Beckham speaking in several languages, dubbed from the original English, in a campaign to raise awareness about malaria. On the other hand, deepfakes can also be used for malicious purposes, such as the recent video that surfaced on social media in which the face of the actress Rashmika Mandanna was overlaid on the body of a British-Indian model.
CREATION AND IMPACT OF DEEPFAKES
Deepfakes can be created easily with a Generative Adversarial Network (GAN), and with such tools freely accessible on the internet, social media sites have been flooded with an overabundance of deepfake content. Because the technology can be used to create entirely new media with superimposed images or to manipulate existing media, its use across social media has grown rapidly. In analysing the harmful ways in which generative AI can be used to create deepfakes, it is necessary to acknowledge the insidious gender disparity. A 2019 report by Deeptrace, an Amsterdam-based firm, revealed that 90% of deepfake videos used for pornographic purposes are made using images of women rather than men. The report suggested that these videos were used as revenge pornography, non-consensual pornography or as part of other illegal activities. Deepfakes can also be used to instigate political outrage or disputes. For instance, in 2020 numerous deepfake videos of the then presidential candidate Joe Biden were circulated online, showing him falling asleep in interviews or slurring his words, with the intention of spreading rumours about the deterioration of his mental faculties. In 2022, a deepfake video of the Ukrainian president Volodymyr Zelenskyy asking his people to lay down their weapons and surrender to the Russian invasion was circulated; it gained such traction on Russian social media that the Ukrainian president had to issue a clarification disavowing the video. By virtue of their deceptive authenticity, deepfakes are increasingly used in financial scams, misinformation campaigns, pornography, blackmail, extortion and other unlawful activities.
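A note on the underlying technology may help explain why such content is so easy to mass-produce: the adversarial training loop at the heart of a GAN is remarkably compact. The sketch below, written in Python with the PyTorch library (an assumed toolchain chosen purely for illustration; real deepfake pipelines rely on much larger face-swapping architectures, face alignment and curated footage of the target), shows the basic idea of a generator learning to produce fake images while a discriminator learns to distinguish them from real ones.

```python
# Minimal GAN training sketch (illustrative only, assuming PyTorch is installed).
# Real deepfake systems use convolutional encoder-decoders and large face datasets.
import torch
import torch.nn as nn

# Generator: maps 100-dimensional random noise to a flat 64x64 "image".
generator = nn.Sequential(
    nn.Linear(100, 256), nn.ReLU(),
    nn.Linear(256, 64 * 64), nn.Tanh(),
)

# Discriminator: outputs the probability that an input image is real.
discriminator = nn.Sequential(
    nn.Linear(64 * 64, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

real_images = torch.rand(32, 64 * 64)  # stand-in for a batch of real face images

for step in range(100):
    # 1. Train the discriminator to label real images 1 and generated images 0.
    noise = torch.randn(32, 100)
    fake_images = generator(noise).detach()  # detach: do not update the generator here
    d_loss = (loss_fn(discriminator(real_images), torch.ones(32, 1))
              + loss_fn(discriminator(fake_images), torch.zeros(32, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2. Train the generator to make the discriminator label its output as real.
    noise = torch.randn(32, 100)
    g_loss = loss_fn(discriminator(generator(noise)), torch.ones(32, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

The two networks improve against each other until the generator's output becomes difficult to distinguish from genuine footage, which is the adversarial principle that makes convincing face-swapped video achievable even on consumer hardware.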
Because media featuring celebrities is consumed in high demand, celebrities often become the target of such content. The increasing use of deepfakes across the world necessitates an analysis of the legality of deepfakes produced with generative AI under privacy, data protection and information protection laws. In the case of deepfakes of celebrities, however, an examination of the personality rights recognised in the relevant jurisdiction becomes essential. It is also pertinent to note that deepfakes of celebrities can be intended purely for entertainment, promotion or raising awareness. For example, the resurrection of the late Paul Walker in the Fast and Furious films was achieved using deepfake and AI technology. On the other hand, the same technology can be used to create harmful content that violates the privacy and personality rights of the celebrity. Examples include the recent videos of the actress Rashmika Mandanna, as well as advertisements and endorsements featuring deepfakes of celebrities such as Tom Hanks, Taylor Swift and Kylie Jenner, made with the intention of misleading the public.
LEGAL CHALLENGES AND ENFORCEMENT
Considering the threats posed by the misuse of AI to create harmful deepfakes, several nations have established laws or regulations to monitor the creation and circulation of deepfakes. In the United States, although there is no federal legislation regulating the creation and dissemination of deepfakes, several states have enacted their own. Texas and California enacted legislation in 2019 prohibiting the use of deepfakes to influence or manipulate the opinion of voters in the then-upcoming presidential election. States such as Virginia, Georgia, California and New York have enacted legislation banning the use of deepfakes for non-consensual pornography and other illegal uses. The United Kingdom is moving towards criminalising deepfakes, especially non-consensual pornographic deepfakes, but has no specific legislation as yet. South Korea has penalised the creation and dissemination of deepfakes with fines and imprisonment. Interestingly, China has arguably the strictest regulations regarding deepfakes: it has implemented specific policies such as a ban on certain deepfakes and a requirement of watermarks or clear labels indicating that the media is AI-generated or a deepfake.
In India, there are no specific regulations addressing the harmful uses of AI, particularly deepfakes. Provisions of the Information Technology Act, 2000 have instead been used to deal with deepfakes that violate the rights of a person or the public. Section 66E of the Act is invoked to punish breaches of privacy through the dissemination of media depicting the private areas of a person without their consent. Sections 67, 67A and 67B provide punishment for the dissemination of media that is obscene or sexually explicit or that contains sensitive images of children. The Ministry of Electronics and Information Technology has also explicitly directed that, in the case of potentially harmful deepfakes, social media platforms must perform the required due diligence and take down such content in compliance with the IT Rules, 2021. Apart from these national measures, social media intermediaries such as Facebook, Instagram and X have their own guidelines for dealing with deepfakes, such as recognising or labelling media as deepfakes, but they do not have specific mechanisms for taking down such content unless it is reported by users.
Although the legislation and guidelines in India and other countries establish a loose framework for action against harmful deepfakes that violate a person's privacy or spread misinformation, none of these regulations or policies specifically addresses the violation of personality rights. Personality rights are a branch of intellectual property law and are widely regarded as an extension of copyright. They comprise the right to privacy and the right to publicity, both derived from a person's personhood. Personality rights are usually discussed with respect to a celebrity's or famous person's right to publicise themselves and to commercially exploit their professional reputation. While a deepfake of a non-celebrity violates that person's dignity, privacy and autonomy, a deepfake of a celebrity violates those same rights along with the celebrity's personality rights and professional reputation. It is therefore necessary to understand the impact of legal regulation of deepfakes specifically on personality rights.
While countries such as the US and the UK have an established system of protecting personality rights as a form of intellectual property alongside their privacy laws, the protection of personality rights remains uncharted territory in Indian intellectual property law, as there is no statutory definition of or provision for such rights. There are also very few judicial precedents establishing the position of law on the protection of personality rights in India. In 'Amitabh Bachchan v. Rajat Nagi', the Delhi High Court granted an injunction restraining the defendants from using the face, voice and other personality traits of the plaintiff for commercial gain, thereby laying the foundation for the protection of personality rights in India. In 'Phoolan Devi v. Shekhar Kapoor', the Court stated that the protection of a celebrity's reputation and image was a "constitutional right". 'Anil Kapoor v. Simply Life' paved the way for the protection of personality rights specifically in relation to content created with AI: the Delhi High Court recognised the plaintiff's personality rights and granted an injunction restraining the use of his voice, face and other elements of his likeness for commercial gain. Even the phrase "jhakas", popularised by the actor, was recognised as part of his personality rights. More importantly, the Court also restrained websites and intermediaries from using the actor's persona to disseminate AI-generated content and deepfakes. In the Rashmika Mandanna matter, the personality rights of the actress were not addressed; instead, the provisions of the Information Technology Act were invoked to proceed against the accused. The episode highlighted the urgent need to regulate deepfakes through statutory provisions and to recognise personality rights in the country.
Another question that remains unanswered, owing to the lack of regulation of personality rights and AI-generated media, is the post-mortem or posthumous protection of personality rights. If deepfakes of living celebrities can be created, deepfakes of deceased celebrities can be created just as easily. To circle back to an earlier example, the posthumous inclusion of Paul Walker in the Fast and Furious films was possible only because of AI and deepfake technology. But the same technology can be used to harm the reputation of a deceased celebrity or to make wrongful commercial gains, since the celebrity is no longer in a position to protect these rights. The question that remains is whether personality rights survive the celebrity's death and can be enforced by their legal representatives or heirs. This position was addressed in India in 'Krishna Kishore Singh v. Sarla Saraogi', where the Delhi High Court refused to grant an injunction to stop the OTT release of the film "Nyay: The Justice", based on the life and death of the actor Sushant Singh Rajput. The Court held that the actor's rights to publicity and privacy, and his personality rights, were extinguished on his death and could not be enforced by his heirs or legal representatives. It is possible for this precedent to be extended to cases where a deepfake violates the personality rights of a deceased celebrity. The question of post-mortem protection of celebrity rights, which remains unanswered in many other jurisdictions, therefore appears to be settled in India.
CONCLUSION
At a time when AI and AI-generated content are taking the world by storm, it is the need of the hour to establish adequate regulations and policies to monitor the use of AI and its output. While regulating the use of AI, it is also necessary to analyse the implications of granting copyright to AI-generated works and the impact of such works on personality rights. The question of protecting personality rights against AI-created works such as deepfakes is easier in countries such as the USA, where an established framework of personality rights exists. In countries such as India, where personality rights are not clearly defined, the primary task is to establish the position of law on personality rights. It is equally important for social media intermediaries to put in place mechanisms to identify deepfakes circulated on their platforms. Such videos should carry a disclaimer or label that they are AI-generated, and should be taken down immediately if they violate the right to privacy of a non-celebrity or the personality rights of a celebrity. The liabilities and responsibilities of social media intermediaries in the event of the circulation of deepfakes should therefore be clearly defined.
References:
- Vejay Lalla, Adline Mitrani & Zach Harned, Artificial Intelligence: Deepfakes in the Entertainment Industry, WIPO MAGAZINE (June 2022), available at https://www.wipo.int/wipo_magazine/en/2022/02/article_0003.html
- Deeptrace, The State of Deepfakes: Landscape, Threats, and Impact (2019), available at https://regmedia.co.uk/2019/10/08/deepfake_report.pdf
- Vikrant Rana, Anuradha Gandhi & Rachita Thakur, Deepfakes and Breach of Personal Data- A Bigger Picture, LIVELAW, (Nov. 24, 2023, 12:16 PM) available at https://www.livelaw.in/law-firms/law-firm-articles-/deepfakes-personal-data-artificial-intelligence-machine-learning-ministry-of-electronics-and-information-technology-information-technology-act-242916?infinitescroll=1
- Dustin Carnahan, Faked Videos Shore Up False Beliefs About Biden's Mental Health, THE CONVERSATION, (Sept. 16, 2020, 7:58 PM) available at https://theconversation.com/faked-videos-shore-up-false-beliefs-about-bidens-mental-health-145975
- Bobby Allyn, Deepfake video of Zelenskyy could be ‘tip of the iceberg’ in info war, experts warn, NPR, (Mar. 16, 2022, 8:26 PM) available at https://www.npr.org/2022/03/16/1087062648/deepfake-video-zelenskyy-experts-war-manipulation-ukraine-russia
- Caroline Quirk, The High Stakes of Deepfakes: The Growing Necessity of Federal Legislation to Regulate This Rapidly Evolving Technology, PLJ, (June 19, 2023) available at https://legaljournal.princeton.edu/the-high-stakes-of-deepfakes-the-growing-necessity-of-federal-legislation-to-regulate-this-rapidly-evolving-technology/
- Rina Chandran, Bollywood Star or Deepfake? AI floods social media in Asia, CONTEXT, (Dec. 14, 2023) available at https://www.context.news/ai/bollywood-star-or-deepfake-ai-floods-social-media-in-asia
- Kalpana Tyagi, Deepfakes, Copyright and Personality Rights: An Inter-Disciplinary Perspective, ILEC, 205, 191-210 (2023).
- Personality Rights, CASE WESTERN RESERVE UNIVERSITY, available at https://lawresearchguides.cwru.edu/IP/personality-rights
- Elizabeth F Judge & Amir M Korhani, Deepfakes, Counterfeits, and Personality, ALBERTA LAW REVIEW, 3, 1-52, (2021).
- Amitabh Bachchan v. Rajat Nagi, 2022 SCC OnLine Del 4110
- Phoolan Devi v. Shekhar Kapoor, (1995) 57 DLT 154
- Anil Kapoor v. Simply Life, CS (COMM) 652/2023
- PTI, Rashmika Mandanna deepfake video: Govt asks social media firms to identify, remove deepfakes within 36 hrs once reported, DECCAN HERALD, (Nov. 7, 2023, 5:20 PM) available at https://www.deccanherald.com/india/rashmika-mandanna-deepfake-video-govt-asks-social-media-firms-to-identify-remove-deepfakes-within-36-hrs-once-reported-2760434
- Krishna Kishore Singh v. Sarla Saraogi, 2023 SCC OnLine Del 3997
Author:
* Ms. Pragathi U Bhat
4th Year BA LLB Student,
Faculty of Law, PES University, Bengaluru
Disclaimer: The opinions expressed in the article are the personal opinions of the author. The facts and opinions appearing in the article do not reflect the views of the Alliance Centre for Intellectual Property Rights (ACIPR) and the Centre does not assume any responsibility or liability for the same.