Shaurya Mishra & Niranjan Chavhan are students of Maharashtra National Law University, Aurangabad.

1.     Introduction:

In today’s rapidly advancing tech landscape, deepfake technology poses a significant threat to societal well-being and ethical standards. Essentially, deepfakes are manipulated videos in which a person’s face is seamlessly replaced by the digitally generated face of another. The term “deepfake” dates back to 2017, when a Reddit user coined it and used it as a handle. The technology has become widely accessible and is poised to become even more prevalent as computational capabilities advance.

A poignant illustration of the potential dangers of deepfake technology unfolded in Channel 4’s “Queen Elizabeth’s alternative Christmas Speech” in 2020. Commencing with the Queen’s customary address, the video took an unexpected turn as she humorously touched upon public royal scandals. The revelation that the Queen in the video was a fabrication stirred public awareness and discourse on the gravity of deepfake technology. Beyond mere misinformation, deepfakes harbour the capacity to influence military morale in warfare scenarios, and they have been employed to tarnish reputations, as evidenced by Sensity’s documentation of 85,000 deepfake videos aimed at reputation destruction.

As technological advancements pivot towards biometric applications, reliance on facial recognition for security, whether for mobile phones, bank accounts, or passports, faces a precarious future in light of the rapid evolution of deepfake capabilities. On the other hand, instances of positive deployment of deepfake technology, such as its integration into corporate training videos and entertainment productions like “Sassy Justice,” underscore the dual nature of its impact. This article endeavours to explore the intricate legal dimensions surrounding deepfakes, shedding light on how this burgeoning technology engenders societal harm.

2.     Understanding Deepfakes:

The inception of deepfake technology is not entirely novel, with roots in image morphing, a technique that has long been used to target women. Evolving from this precursor, contemporary deepfake techniques rely on recently popularized methods such as Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs). A GAN employs two neural networks, a generator and a discriminator, locked in a continual adversarial contest. The generator learns the statistical patterns of real data and produces content that is hardly distinguishable from it. GANs are instrumental in creating lifelike images, videos, and audio, especially in face-swapping scenarios.
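To make the generator-discriminator dynamic concrete, the following is a minimal Python sketch of a single GAN training step using PyTorch. The network sizes, learning rates, and stand-in data are illustrative assumptions for exposition, not a depiction of any actual deepfake system.

```python
import torch
import torch.nn as nn

LATENT_DIM, IMG_DIM = 64, 28 * 28  # illustrative sizes, not from a real system

# Generator: maps random noise to a synthetic "image" vector.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),
)

# Discriminator: outputs a logit scoring how "real" an input looks.
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
)

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def training_step(real_images: torch.Tensor) -> None:
    batch = real_images.size(0)
    fake_images = generator(torch.randn(batch, LATENT_DIM))

    # Discriminator learns to label real data 1 and generated data 0.
    d_loss = (loss_fn(discriminator(real_images), torch.ones(batch, 1))
              + loss_fn(discriminator(fake_images.detach()), torch.zeros(batch, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator learns to make the discriminator output 1 on its fakes.
    g_loss = loss_fn(discriminator(fake_images), torch.ones(batch, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# One step on stand-in "real" data scaled to the generator's Tanh range.
training_step(torch.rand(16, IMG_DIM) * 2 - 1)
```

Each step tightens the contest the paragraph above describes: the discriminator gets better at spotting fakes, which in turn forces the generator to produce more convincing ones.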

On the other hand, VAEs, adopting an encoder-decoder architecture, learn the underlying distribution of the input data. The encoder compresses data into a latent space, and the decoder reconstructs the original input, with training enforcing continuity in the latent space. VAEs facilitate the creation of novel facial expressions, poses, or speech patterns by manipulating representations within the latent space.
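The encoder-decoder mechanics can likewise be sketched in a few lines. The following minimal PyTorch VAE is a simplified illustration under assumed dimensions; the reparameterisation step is what keeps sampling from the latent space differentiable during training.

```python
import torch
import torch.nn as nn

LATENT_DIM, IMG_DIM = 16, 28 * 28  # illustrative sizes

class VAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(IMG_DIM, 128), nn.ReLU())
        self.to_mu = nn.Linear(128, LATENT_DIM)      # mean of the latent code
        self.to_logvar = nn.Linear(128, LATENT_DIM)  # log-variance of the latent code
        self.decoder = nn.Sequential(
            nn.Linear(LATENT_DIM, 128), nn.ReLU(),
            nn.Linear(128, IMG_DIM), nn.Sigmoid(),
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        # Reparameterisation trick: sample a latent point differentiably.
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.decoder(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    # Reconstruction error plus a KL term that keeps the latent space smooth.
    recon_err = nn.functional.binary_cross_entropy(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_err + kl

# Manipulating z before decoding is what lets a deepfake pipeline
# synthesize new expressions or poses from the learned latent space.
model = VAE()
x = torch.rand(8, IMG_DIM)
recon, mu, logvar = model(x)
print(vae_loss(recon, x, mu, logvar))
```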

Advanced deepfake techniques often combine GANs and VAEs, synergizing their strengths for more nuanced and realistic outputs. GANs contribute to adversarial training, while VAEs excel in the continuous, smooth manipulation of latent representations. This convergence enhances the sophistication of deepfake technology, posing challenges to the identification of synthetic content.
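A simplified sketch of one common hybrid, in the spirit of VAE-GAN architectures (how any particular pipeline wires this together is an assumption here): the VAE’s decoder doubles as the GAN’s generator, so its outputs are judged by a discriminator for realism while a KL term keeps the latent space smooth enough for controlled manipulation.

```python
import torch
import torch.nn as nn

LATENT_DIM, IMG_DIM = 16, 28 * 28  # illustrative sizes

# The VAE's decoder doubles as the GAN's generator; the discriminator
# judges its reconstructions for realism instead of pixel error alone.
decoder = nn.Sequential(
    nn.Linear(LATENT_DIM, 128), nn.ReLU(),
    nn.Linear(128, IMG_DIM), nn.Sigmoid(),
)
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1),
)
adv_loss = nn.BCEWithLogitsLoss()

def hybrid_generator_loss(z, x, mu, logvar):
    recon = decoder(z)
    rec = nn.functional.binary_cross_entropy(recon, x, reduction="sum")  # VAE term
    adv = adv_loss(discriminator(recon), torch.ones(x.size(0), 1))       # GAN term
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())         # smooth latents
    return rec + adv + kl

# Stand-in latent codes and targets, as if produced by an encoder.
z = torch.randn(4, LATENT_DIM)
x = torch.rand(4, IMG_DIM)
mu, logvar = torch.zeros(4, LATENT_DIM), torch.zeros(4, LATENT_DIM)
print(hybrid_generator_loss(z, x, mu, logvar))
```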

3.     Offences emanating from the rampant growth of Deepfakes:

While deepfake technology does not inherently pose a threat in itself, its exploitation as a tool for perpetrating offences raises significant concerns. The following offences can be readily facilitated through the deployment of deepfakes:

Intellectual Property Rights (IPR):

The manipulation of existing copyrighted material for use in deepfakes may violate a person’s right to control their image or likeness, necessitating adaptations to copyright law to encompass the role of AI in content creation.

Identity Manipulation and Falsification:

Illegitimate utilization of deepfakes for identity theft or virtual forgery constitutes a severe offence. Exploiting deepfakes to pilfer identities, make false representations, or mislead public perception can inflict damage on a person’s reputation and credibility. Legal action for these offences can be pursued under Sections 66 and 66-C of the Information Technology Act, 2000 (hereinafter, “IT Act”), and provisions such as Sections 420 and 468 of the Indian Penal Code, 1860 (hereinafter, “IPC”).

Dissemination of Misinformation among the Masses:

The spreading of false information through deepfakes to undermine governmental entities, incite hatred, or foster disaffection poses a significant societal threat. It can induce chaos, erode public trust, and be leveraged to influence public opinion for desired political outcomes. The potential of deepfakes to create realistic scams also poses a direct threat to consumer protection by duping customers into fraudulent schemes. Legal remedies for such offences include Section 66-F of the IT Act and the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2022 (hereinafter, “Amendment Rules, 2022”). Additionally, invoking Sections 121 and 124-A of the IPC may be considered.

Spreading Hate Speech and Defaming People Online:

The employment of deepfakes to spread hate speech and to defame individuals, harming their reputation through fake videos or audio clips, is deeply concerning. Legal remedies for these offences encompass the Amendment Rules, 2022, under the IT Act and Sections 153-A, 153-B, and 499 of the IPC.

Impact on Elections:

The use of deepfakes to spread misinformation during elections demonstrates their damaging potential to manipulate political discourse and disrupt the integrity of electoral processes. Legal avenues for addressing these concerns involve Sections 66-D and 66-F of the IT Act. Additionally, Sections 123 (including Section 123(3-A)) and 125 of the Representation of the People Act, 1951, along with the Voluntary Code of Ethics for the General Election, 2019, can be invoked to counter election-related threats in India.

Privacy Violation, Obscenity, and Pornography:

The misuse of deepfakes to synthesize fabricated images or video clips depicting individuals engaging in fictitious activities may extend to non-consensual pornography, political propaganda, or misinformation campaigns. Concerns about data privacy and consent arise as personal data is collected to train AI models. Legal frameworks, including initiatives like the Digital Personal Data Protection Act, 2023, must adapt to encompass the unique challenges posed by deepfakes. Legal action against these offences can be pursued under Sections 66-E, 67, 67-A, and 67-B of the IT Act. Additionally, Sections 292 and 294 of the IPC and Sections 13, 14, and 15 of the Protection of Children from Sexual Offences Act, 2012 (POCSO) can be invoked to safeguard women and children from potential violations of their rights.

Authentication of Evidence:

The advent of easily crafted and convincing deepfakes, and the resulting potential for individuals to manipulate video and audio evidence, adds a layer of complexity to the authentication of evidence and its admissibility. This technological shift raises the stakes, as perpetrators can exploit deepfakes to cast doubt on accusations, making it more challenging to establish the truth. Our judicial systems must stay ahead of the curve, incorporating innovative strategies to distinguish authentic evidence from manipulated counterparts and to ensure that the pursuit of justice remains steadfast in the face of deceptive digital advancements.

4.     Addressing the Legal Void:

Addressing the challenges posed by deepfake technology requires a multifaceted government approach. Firstly, a censorship strategy could limit public accessibility to fake information through stringent orders to social media intermediaries and publishing houses. Secondly, a punitive approach may hold individuals or organizations accountable for originating or disseminating misinformation. Lastly, an intermediary regulation approach can obligate online intermediaries to swiftly remove misinformation, subjecting them to liability under Sections 69-A and 79 of the IT Act. These measures, coupled with international collaboration, can foster a more resilient and regulated environment against deepfake threats.

In response to the deepfake threat highlighted by Rashmika Mandanna’s viral video, Union Minister Rajeev Chandrasekhar appointed a Rule 7 Officer on November 24, 2023. Rule 7 of the IT Rules allows individuals to take platforms to court for offences under the IPC, bypassing the safe-harbour protection available to intermediaries under Section 79 of the IT Act. Under this regime, the officer will aid citizens in reporting violations of law by platforms, focusing on deepfake detection, prevention, reporting, and awareness. Social media platforms were given a seven-day ultimatum to align their terms of service with Indian laws on deepfakes. After streamlining citizen reports, the government will support FIRs against platforms for violations of the IT Rules, including those involving deepfakes.

5.     Identifying Deepfakes:

In a reality where truth and illusion intertwine, staying informed on evolving detection methods is crucial, and caution should be exercised in navigating the digital landscape. Deepfakes often betray themselves through incongruities in lighting, shadows, and reflections that deviate from reality; scrutinizing videos for such alterations can unveil their artificial nature. Unnatural audio qualities, like inconsistent background noise or voice pitch, may expose manipulation. Artificial movements that stray from genuine human behaviour, such as odd facial expressions or unnatural gestures, also signal deepfakes. Various tools have been developed to aid in the identification of deepfakes. Microsoft’s Video Authenticator, for instance, can detect imperfections such as blending boundaries or subtle grayscale elements not easily visible to the human eye, while Facebook’s reverse-engineering tool is designed to identify the fingerprints left behind by an AI model.
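As a toy illustration of artefact-based screening (emphatically not a reconstruction of Microsoft’s or Facebook’s proprietary tools), the sketch below uses OpenCV to flag video frames whose face-region sharpness is a statistical outlier relative to the rest of the clip, one crude proxy for blending artefacts. The file path, the Haar-cascade detector, and the outlier threshold are all illustrative assumptions.

```python
import cv2
import numpy as np

VIDEO_PATH = "suspect_clip.mp4"  # hypothetical input file

# The Haar face detector ships with OpenCV's bundled data files.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_sharpness_scores(path):
    """Laplacian variance of the first detected face in each frame."""
    scores = []
    cap = cv2.VideoCapture(path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, 1.1, 5)
        if len(faces):
            x, y, w, h = faces[0]
            scores.append(cv2.Laplacian(gray[y:y + h, x:x + w], cv2.CV_64F).var())
    cap.release()
    return np.asarray(scores)

scores = face_sharpness_scores(VIDEO_PATH)
if scores.size:
    z = np.abs(scores - scores.mean()) / (scores.std() + 1e-9)
    # Frames whose face sharpness is a strong outlier merit manual review;
    # the 3-standard-deviation threshold is an arbitrary illustrative choice.
    print("Frames to review:", np.flatnonzero(z > 3).tolist())
```

A heuristic of this kind can only prioritise frames for human scrutiny; production detectors combine many such signals with learned models.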

6.     Conclusion:

Ensuring a resilient legal framework requires a nuanced equilibrium between safeguarding individuals and upholding freedom of expression. As the legal landscape confronts deepfake challenges, the imperative to adapt existing frameworks and institute new laws becomes evident in safeguarding individual privacy and societal integrity. It is essential to remain updated on advancements in deepfake detection tools, countering their increasing sophistication. Through collective vigilance and the strategic use of tools, we strive to preserve the authenticity of our visual landscape.
