Aditya Mehrotra is a Student at Symbiosis Law School, Pune 

Introduction

The phrase “deepfake” refers to the technology behind the creation of such material, namely deep learning algorithms, which “teach” themselves to solve problems using massive datasets. Deepfakes thus use artificial intelligence (AI) to generate entirely new video or audio in order to simulate an event that never took place.

Cristina López, a senior analyst at Graphika, a company that studies the transmission of data across digital networks, defined “deepfake” as “footage generated by a computer that has been trained through countless existing images.”

Images created using deepfake technology aren’t your average hoax. The false visuals of Donald Trump’s arrest that went viral before his indictment were created by AI, but they are not deepfakes. Similarly, an AI-generated image of the Pope wearing a puffer jacket does not constitute a deepfake. What distinguishes a deepfake is the presence of human intervention.

In the case of deepfakes, the user has no control over the generation process itself and is only given the opportunity to say “yes” or “no” to the final product after it has been created. In a recent instance in India, the BJP released a pair of videos featuring Manoj Tiwari in February 2020. There was just one underlying video, but it was produced in both English and Haryanvi to appeal to more than one set of voters. The imminent threat posed by this technology can now be addressed most effectively by combining technological countermeasures with legislation. Deepfake technology has lately been used to create fake pornographic videos and political campaign material, raising concerns about privacy, identity theft, the legitimacy of elections, and the reliability of information shared on social media.

The concept of what makes a deepfake is subject to change since new approaches and applications are frequently found as a result of the fast development of technology. This article uses the term “deepfake” to refer to a specific kind of fabricated media that uses machine learning to create convincing digital representations of real-world people, places, or things.

Relevance of the Applicable Legal Provisions

In India, deepfake-related crimes fall beyond the purview of any single legislation, and the government has not done enough to address this rising criminality. India’s then IT Minister, RS Prasad, conflated deepfakes with fake news during a parliamentary question-and-answer session. The minister’s proposals showed that the seriousness of the dangers posed by the misuse of this technology was not being taken into consideration.

The phrase “deepfake” describes the practice of using AI to create or alter digital content (such as photos, videos, or audio files) so that it appears authentic. Currently, deepfakes are not addressed by any Indian legislation. However, certain rules already in place may be adapted to address the problems they cause.

First, cheating by personation using a computer resource is an offence under Section 66D of the Information Technology Act, 2000. This provision covers deepfakes that are meant to deceive people into believing they are dealing with someone else.

Second, unauthorized access to computer systems, such as that used to produce deepfakes, is addressed under Section 43 of the Information Technology Act, 2000.

Third, forgery with intent to harm reputation is covered under Section 469 of the Indian Penal Code, which may be applied to deepfakes. The Ministry of Electronics and Information Technology also released a proposed rule change, the Information Technology [Intermediaries Guidelines (Amendment) Rules] 2018, which would require intermediaries to remove or disable access to unlawful content within 24 hours of receiving a complaint. This could be used to take down deepfakes that are illegal in India. Despite these safeguards, however, India’s regulation of deepfakes is inadequate and needs to be expanded.

Fourth, violations of privacy are punishable under Section 66E of the IT Act. Sections 67, 67A, and 67B of the same Act also make it illegal to transmit or publish sexually explicit material, or material depicting children in sexual acts, through electronic means.

When a deepfake video or audio clip is created in which a person appears to say something detrimental to their reputation, the provisions for criminal defamation under Sections 499 and 500 of the Indian Penal Code may be invoked. A hoax video in which the speaker appears to make inflammatory claims would fall into this category. Sunilakhya v. H.M. Jadwet established that criminal defamation requires an intention to harm another person’s reputation; since defamation law extends to visual representations, it applies to deepfakes as well.

Identity Theft and Deepfakes

Identity theft occurs when one person fraudulently obtains another’s personal identifying information and then uses it to his or her advantage, such as by opening bank accounts in the victim’s name or applying for government benefits. According to a recent U.S. news report, the CEO of an unidentified UK-based energy firm was duped into believing he was speaking with the CEO of his German parent company, who instructed him to wire €220,000 ($243,000) to a Hungarian supplier immediately; the deepfaked voice was convincing enough to fool him.

Furthermore, the story of journalist Rana Ayyub shows that our laws do not adequately protect victims of revenge porn. In her case, someone had edited a pornographic video to make it seem as though Rana Ayyub was featured in it. This vicious attack occurred not long after she began advocating for the victim of the Kathua rape case, in which an eight-year-old girl was raped over several days and then killed. BJP officials in the Jammu region had led a campaign in favor of the accused, arguing that they were being unfairly persecuted for their religious affiliation. When Rana Ayyub spoke out against this on Twitter, she was subjected to online harassment and hate speech. Several fabricated screenshots of tweets attributed to her circulated on the microblogging site, and she discovered she was featured in a few of them.

The prevalence of deepfakes has skyrocketed in recent years, prompting a number of governments to pass legislation designed to curb their abuse. For instance, the “Defending Each and Every Person from False Appearances by Keeping Exploitation Subject to Accountability Act of 2019” was proposed in the United States House of Representatives, and a similar measure was proposed in California, both with the intent of regulating the use of deepfakes. In contrast, India has no deepfake-specific laws; however, one may seek protection under other regulations currently in effect, such as the Copyright Act, 1957 for copyright infringement caused by deepfakes, Section 499 of the Indian Penal Code for defamation through deepfakes, or the Information Technology Act, 2000. These laws shield us from the repercussions of deepfakes, but they do not address the practice of creating them, so new legislation is required.

Content Moderation and Intermediary Liability

Individuals’ right to privacy is highly vulnerable to invasion over the internet. This places a heavy burden on online service providers to protect the privacy of their customers’ sensitive data. Because of their potentially disastrous role in spreading disinformation, deepfakes became a source of great pressure on social media intermediaries in the run-up to the 2016 US elections.

The ‘Deepfake Detection Challenge’ was announced in 2019 by Facebook, Amazon, and Microsoft to encourage the creation of reliable technologies to identify deepfakes. The winning team in the competition managed to create a system that was only 65.18 percent accurate at detecting deepfakes. In parallel, social media platforms like Facebook, Reddit, and Twitter have implemented anti-deepfake policies.

Under Facebook’s policy, fact checks are conducted by ‘independent third-party fact-checkers’, with more than 50 partners working across over 40 languages. Given the ineffectiveness of fact-checking in combating deepfakes, such measures are of limited use. While fact-checking may help curb the spread of misinformation, it does not address the more nuanced harms caused by deepfakes, such as privacy and intellectual property violations.

Section 79 of the Information Technology Act, 2000 mandates that, upon receiving actual knowledge or a court order, Indian intermediaries must take down unlawful content, as discussed in Myspace Inc v. Super Cassettes Industries Ltd. The vast majority of content moderation today uses a hybrid system combining artificial intelligence (AI) and human reviewers. In the case of deepfakes, it may be impossible to comply with the updated Intermediary Rules, since neither human reviewers nor automated tools can recognize them with complete accuracy.

Conclusion and Suggestions

The present cybercrime law in India does not go far enough to address the problem of deepfakes. Since the Information Technology Act, 2000 does not include any provisions directly addressing AI, machine learning, or deepfakes, effective regulation of their use is difficult. If crimes committed using deepfakes are to be effectively regulated, it may be necessary to amend the IT Act, 2000 to include sections addressing the use of deepfakes and the consequences of their abuse. Legal safeguards for people whose photos or likenesses are exploited without their permission could be strengthened, and penalties for those who create or disseminate deepfakes for malicious purposes could be raised.

It is also crucial to remember that the development and deployment of deepfakes is a worldwide problem, one that will likely call for multilateral efforts to govern their use and forestall privacy invasions. For now, the best way to protect yourself and your business against deepfakes is to be vigilant about checking the legitimacy of the content you see online. In the meantime, governments may adopt the measures outlined below:

  1. The first is the censorship approach of blocking intermediaries and publishers from disseminating false information to the public.
  2. The second strategy, which is more severe, is to hold accountable those who create or spread disinformation.
  3. The third strategy, under Sections 69-A and 79 of the Information Technology Act, 2000, relies on the regulation of intermediaries to ensure the rapid removal of false material from their platforms.
