Mallika Patnaik is a student of NLUO

 

DeepFakes, a product of deep learning, are distorted pictures and media created by using Artificial Intelligence (AI) algorithms to synthesize and generate malicious and fake content. In the ever-evolving area of AI, DeepFakes are in the limelight as they are being used with malice to defame and tarnish the reputation of well-known people as well as to infringe the copyright of creators. They also play an important role in creating digitally generated voices for children with disabilities, as done by Cereproc, and they have been used in the entertainment sector as well. However, with AI becoming increasingly popular and easy to access, the malicious use of DeepFakes is wreaking havoc.

Thanks to Generative Adversarial Networks (GANs), fake videos can be created out of thin air. This machine learning technique produces images that resemble real ones without being their exact copies: it can generate faces that do not exist simply by studying facial features and composing new portraits, and it is being wielded as a tool to harass famous personalities. DeepFakes blend into myriad areas of law, such as data protection, breach of privacy, safeguards under the right to freedom of speech and expression, and, last but not least, copyright infringement.
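For readers curious about the mechanics, the following is a minimal sketch of the adversarial setup described above, assuming PyTorch (the article names no specific library): a generator fabricates images from random noise while a discriminator learns to tell them apart from real ones. The network sizes, image dimensions, and training routine are illustrative assumptions, not any production DeepFake pipeline.

```python
# Minimal sketch of a GAN's generator/discriminator loop (assumes PyTorch).
import torch
import torch.nn as nn

latent_dim = 100     # size of the random noise vector fed to the generator
image_dim = 64 * 64  # a flattened 64x64 grayscale face, for illustration

# Generator: turns random noise into a synthetic image.
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, image_dim), nn.Tanh(),
)

# Discriminator: scores how "real" an image looks (1 = real, 0 = fake).
discriminator = nn.Sequential(
    nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def training_step(real_images: torch.Tensor) -> None:
    """One adversarial round: the discriminator learns to spot fakes,
    then the generator learns to fool it."""
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1. Train the discriminator on real images and freshly generated fakes.
    noise = torch.randn(batch, latent_dim)
    fakes = generator(noise)
    d_loss = loss_fn(discriminator(real_images), real_labels) + \
             loss_fn(discriminator(fakes.detach()), fake_labels)
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2. Train the generator to make the discriminator call its fakes "real".
    g_loss = loss_fn(discriminator(fakes), real_labels)
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

With each round, the discriminator gets better at spotting fakes and the generator gets better at fooling it, which is why the resulting images grow increasingly lifelike.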

In this article, the author aims to analyse these advancements in AI and to discuss the need for a regulatory framework to keep checks and balances on DeepFakes.

Understanding DeepFakes & Its Interplay with Intellectual Property

Recently, there have been instances wherein celebrities and politicians have been victims of DeepFakes. They have been subjected to revenge pornography, the spread of misinformation, and the incitement of hatred.

Since DeepFakes commonly involve the use of various traits (facial and others) of celebrities and famous personalities, they are strongly intertwined with the concept of personality rights. In Titan Industries Ltd. v. M/s Ramkumar Jewellers,[1] the Delhi High Court laid down the elements of celebrity rights and further held that every celebrity owns an enforceable right over his persona as a human being. The Delhi High Court, in Amitabh Bachchan v. Rajat Nagi,[2] granted protection to the plaintiff, safeguarding the use of his voice, face, and unique characteristics, and restrained the defendants from misusing his name. While determining whether a creation in question is infringing, multiple factors must be taken into consideration.

In India, there are two touchstones for determining whether a DeepFake is infringing. Article 19 of the Constitution of India guarantees an individual's right to freedom of speech and expression, subject to reasonable restrictions, whereas the Indian Copyright Act, 1957 (the Act) permits fair and bona fide use of copyrighted work without defining it precisely. These provisions are at cross purposes with each other, and it is against this backdrop that a possible demarcation has to be made between the two.

Regulating DeepFakes: Need of the Hour

It is a settled position that using any celebrity's name or face for publicity, parody, advertisement, satire, academics, etc. would fall under the purview of Article 19 and thus would not be an infringement of their right to privacy. However, this line of thought dilutes any absolute right of celebrities over their persona and reputation. This might prove problematic when the line between 'fair use' and 'malice' is blurry, as is the case in many advertisements and campaigns.

The position of law in India is not sufficiently equipped to deal with DeepFakes. Section 52 of the Act does not particularly define what 'fair dealing' is; it instead contains an exhaustive list of acts that do not amount to infringement, which segregates bona fide works from mala fide ones. Indian copyright jurisprudence is very rigid and inflexible regarding 'fair use', and DeepFakes are not included as an exempted category against infringement. Owing to this, restraining DeepFakes would be a piece of cake. However, it would also mean jeopardizing any work created with a bona fide intention for entertainment. Hence, there needs to be a change in the Indian stance to accommodate those DeepFakes created with a bona fide intention.

In contrast, the United States (US) has a rather wide doctrine of fair use. It follows a four-factor test that examines the purpose and character of the use, the nature of the copyrighted work, the portion of the original work used, and the effect of the use on the market for the original. The US Supreme Court, in a landmark decision, introduced the theory of 'transformative use', which could easily accommodate any creation, as it refers to content infused with a new meaning and expression. Parodists use this as a defence to protect their creations under the blanket of transformative use, and any restraint on these works would negate the freedom of expression in the US context. Malicious content could, therefore, be protected in the guise of transformative use.

If seen from a purely Intellectual Property perspective, DeepFakes can be restrained by arguing that every celebrity has his own persona, which is protected from infringement. The use, however, may still be defended as fair use or transformative use, depending on the jurisdiction.

To protect personality rights from being misused, we have to take the help of data protection law and invoke our fundamental right to privacy. The Supreme Court, in K.S. Puttaswamy v. Union of India, declared the right to privacy a fundamental right. One of the reasons why AI-induced DeepFakes are so concerning is that they are published on social media platforms, thereby reaching millions of people within a span of minutes. The only way to tackle this is through the active participation of intermediaries in finding defamatory and malicious content and taking it down. In reality, however, this would prove to be a herculean task. Intermediaries such as Instagram and Facebook cannot possibly find and take down every piece of DeepFake media; it would be akin to finding a needle in a haystack.

In the event of copyright infringement, the Court in Myspace Inc. v. Super Cassettes Industries Ltd[3] held that intermediaries have to take down infringing content at the behest of private parties; a court order directing them to do so is not necessary. This places even greater pressure on social media platforms to regulate DeepFake media.

A possible way to tackle this would be to develop algorithms that recognize infringing content on its face and report it automatically. This is a big challenge, as the accuracy of intermediaries in recognizing infringing content is far from perfect. While they have their own community guidelines, newer AI technologies can slide by easily as existing algorithms fail to detect anomalies. Intermediaries can set up a fast-track process to fact-check, flag, and report malicious content. They should also set up censorship boards that fish out DeepFakes. This would help the movie industry keep a check on its AI-created promotions and advertisements as well.
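As a rough illustration of such a fast-track process, the sketch below assumes a scoring function supplied by a trained detector and a fixed review threshold; the names (`triage`, `score_fn`, `REVIEW_THRESHOLD`) and the threshold value are hypothetical and do not describe any platform's actual moderation system.

```python
# Hypothetical fast-track flagging pipeline: a detector scores each upload,
# and anything above a threshold is queued for human review.
from dataclasses import dataclass
from typing import Callable, List

REVIEW_THRESHOLD = 0.8  # assumed cut-off; a real platform would tune this


@dataclass
class Upload:
    upload_id: str
    media_bytes: bytes


def triage(uploads: List[Upload], score_fn: Callable[[bytes], float]) -> List[str]:
    """Return IDs of uploads whose DeepFake score crosses the review threshold."""
    return [
        u.upload_id
        for u in uploads
        if score_fn(u.media_bytes) >= REVIEW_THRESHOLD
    ]


if __name__ == "__main__":
    # Stand-in detector: in practice this would be a trained classifier.
    dummy_score = lambda media: 0.9 if media.startswith(b"FAKE") else 0.1
    uploads = [Upload("vid-001", b"FAKE..."), Upload("vid-002", b"REAL...")]
    print(triage(uploads, dummy_score))  # -> ['vid-001']
```

Anything flagged this way would still need human review, since, as noted above, detector accuracy is far from perfect.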

Conclusion

The integration of AI into the entertainment business is limitless. Hence, in countries like India and the US, where the right to freedom of speech and expression is highly regarded and fiercely protected, it is tricky to row through the murky waters of what is infringing and what is not. In the coming years, the courts will have to grapple with the question of law on DeepFakes as the barrier between real life and AI grows thinner. For the present, it is wise to look at DeepFakes from the angle of data protection and privacy to get a clearer answer as to whether a given creation is infringing or not.


[1] Titan Industries Ltd. v. Ramkumar Jewellers 2012 (50) PTC 486 (Del).

[2] Amitabh Bachchan v. Rajat Nagi 2022 SCC OnLine Del 4110.

[3] Myspace Inc. v. Super Cassettes Industries Ltd 2016 SCC OnLine Del 6382.
