Sharanya Chowdhury is a fourth-year student at Dr. Ram Manohar Lohiya National Law University.

Abstract

The article analyses Indian laws pertaining to Non-Consensual Intimate Images (NCIIs), contextualised within recent litigation on the subject. The focus is primarily on the responsibilities of intermediaries in addressing NCIIs, the availability of relevant technologies, and a critique of prevailing approaches to media-recognition AI. The article highlights the necessity of scrutinising and revising existing legal frameworks and technological capabilities to effectively tackle the proliferation of NCIIs online, challenging conventional narratives and advocating for an approach that holds space for interim solutions.

Introduction

In the recent case of X v. Union of India, (2023) 3 HCC (Del) 63, the legal challenges faced by Significant Social Media Intermediaries (SSMIs) in regulating Non-Consensual Intimate Images (NCIIs) have garnered significant attention. This case serves as a crucial backdrop for discussing the complexities surrounding NCIIs. In essence, the litigation centres on the proliferation of NCIIs online, highlighting the enduring impact on victims’ reputations long after culpability is established. The internet’s inherent domino effect exacerbates the situation, perpetuating the harm caused by the initial dissemination of intimate content without consent. Thus, this case underscores the urgent need for robust legal frameworks and effective measures to address the detrimental consequences of NCIIs in the digital sphere.

The case has, through Microsoft Corporation v. Union of India & Ors., further spilled over into the discourse around intermediary and search-engine responsibility and their ability to take action against such content. The continuing litigation has exposed key challenges that need to be addressed in the present scenario.

1. Handling of Non-Consensual Intimate Imagery by Search Engines

The current method of removing NCIIs from search engines involves reporting the specific URLs that disseminate the NCII. This is, firstly, a traumatising process for the victim. Socially speaking, it is often not even feasible, because women who lack a social support system are frequently the targets of such violent crimes. Secondly, these URLs reproduce faster than rabbits; even a person with superhuman willpower cannot reasonably be expected to continually track and report URLs to facilitate the removal of such content. This underscores the practical difficulties faced by individuals or entities in consistently monitoring and addressing the proliferation of harmful content online, particularly in the absence of technological solutions.

According to Google and Microsoft, the challenge with the automatic take-down of NCII lies in the lack of technology. They claim that while such technology is still evolving, at its current stage it is both too inaccurate and too limited to deal with the existing threat.

2. Why Can AI Target CSAM and Not NCIIs?

Although technology addressing this issue exists, it primarily targets Child Sexual Abuse Material (CSAM). CSAM-detection technology is designed to identify and automatically remove such material from search engines. It relies on artificial intelligence (AI) algorithms that excel at categorising individuals by apparent age, hence its effectiveness in those cases. NCIIs, however, encompass a broader spectrum, including not only media related to sexual crimes but also consensual acts filmed and then distributed without consent. Tech giants argue that current AI capabilities lack the nuance to discern the context or intent behind such content. As a result, existing AI systems struggle to accurately identify and address NCIIs beyond the scope of CSAM. This limitation underscores the need for advancements in AI technology to effectively combat the proliferation of NCIIs online.
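To make this limitation concrete, the sketch below (in Python, with hypothetical stand-in functions rather than any vendor's actual pipeline) contrasts what a pixel-level model can plausibly estimate, such as apparent age or explicitness, with consent, which is a fact about how the media was produced and shared and is therefore invisible to the image itself.

```python
# Illustrative sketch only. The two "models" are hypothetical stand-ins,
# not a description of any real CSAM- or NCII-detection system.

def estimate_apparent_age(image_pixels: bytes) -> int:
    """Stand-in for an age-estimation model; a real system would run a
    trained classifier over the decoded image."""
    return 25  # placeholder output

def detect_explicit_content(image_pixels: bytes) -> bool:
    """Stand-in for a nudity/explicitness detector."""
    return True  # placeholder output

def screen_image(image_pixels: bytes) -> dict:
    apparent_age = estimate_apparent_age(image_pixels)   # a visual property
    is_explicit = detect_explicit_content(image_pixels)  # a visual property

    # Consent is not a visual property: the same frame could be private
    # consensual media, consensually published media, or NCII. A model
    # looking only at pixels cannot tell these apart.
    consent_given = None

    return {
        "flag_csam": is_explicit and apparent_age < 18,  # decidable from pixels
        "flag_ncii": None if consent_given is None
                     else (is_explicit and not consent_given),
    }

print(screen_image(b"..."))  # {'flag_csam': False, 'flag_ncii': None}
```

The point of the sketch is simply that the NCII flag cannot be computed from the image alone; it requires external knowledge, such as a victim's verified report.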

3. Is AI Training Around Consent the Only Answer?

While the arguments presented thus far are compelling, a sidelined aspect of the discussion is the AI's capacity to identify media that closely resembles the original and subsequently remove such content. Tech giants have raised concerns regarding AI's ability to achieve this task effectively. They argue that factors such as changes in file format, pixelation alterations, and watermarks pose significant challenges to the AI's recognition capabilities. Google contended that although such images are exactly the same to the human eye, the same cannot be said for AI. To illustrate, this task can be likened to conducting a reverse image search and removing media that closely matches the query. The complexities involved in accurately identifying and removing such content, however, emphasise the need to explore the AI's abilities in this regard further.
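One family of techniques relevant to this "looks identical to a human" problem is perceptual hashing, in which an image is reduced to a short fingerprint that changes little under re-encoding, resizing, or minor edits, and candidate copies are matched by the distance between fingerprints. The following is a minimal toy average-hash sketch in Python, offered purely to illustrate the idea; it is not a claim about what Google or Microsoft actually deploy, and production approaches (PhotoDNA-style hashing, learned embeddings) are considerably more robust.

```python
# Toy perceptual-hash sketch: illustrative only, not a production matcher.

def average_hash(gray_8x8: list[int]) -> int:
    """Hash a 64-pixel grayscale thumbnail: each bit records whether that
    pixel is brighter than the thumbnail's mean brightness."""
    mean = sum(gray_8x8) / len(gray_8x8)
    bits = 0
    for px in gray_8x8:
        bits = (bits << 1) | (1 if px > mean else 0)
    return bits

def hamming_distance(h1: int, h2: int) -> int:
    return bin(h1 ^ h2).count("1")

def is_near_duplicate(h1: int, h2: int, threshold: int = 8) -> bool:
    """Tolerate small differences caused by re-compression, watermarking, etc."""
    return hamming_distance(h1, h2) <= threshold

# Toy data: a reported image, a uniformly brightened copy, and an unrelated image.
original   = [(i * 37) % 256 for i in range(64)]
brightened = [p + 6 for p in original]
unrelated  = [(i * 91 + 40) % 256 for i in range(64)]

h_orig, h_copy, h_other = map(average_hash, (original, brightened, unrelated))
print(is_near_duplicate(h_orig, h_copy))   # True: the edit shifts pixels and mean alike
print(is_near_duplicate(h_orig, h_other))  # False: fingerprints differ in ~half the bits
```

Whether fingerprints of this kind are robust enough for NCII at scale is precisely the factual question the litigation raises; the sketch only shows that "exact match" is not the relevant technical standard.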

4. Why Things Are Worse Than They Look

Under Section 79 of the Information Technology Act, 2000, intermediaries are exempt from liability when they have no role in regulating or editing the content uploaded on their platforms. The same applies to Google and Microsoft. While an additional level of due diligence is imposed on certain intermediaries, those are Significant Social Media Intermediaries as defined under Rule 2(1)(v) of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, and Google and Microsoft do not fall within that category. Intermediaries' duties around NCII are laid out in Rule 3. The only legal obligation that falls on these giants is to comply with the grievance redressal mechanism and follow the guidelines given in X v. Union of India, which is evidently lacking in effectiveness. Further, even if such technology exists, its non-usage could not be penalised in a way that would force compliance.[1]

5. Why the Argument of ‘Individual Responsibility’ is a Losing Battle

Before delving into discussions on navigating the ‘intricacies of internet behaviour’, it is crucial to establish one fundamental point: individual responsibility alone cannot serve as the solution. Proposing that individuals should “bear the burden of safeguarding themselves against online exploitation” is akin to suggesting that women should adhere to a 6,000-page exhaustive rulebook to avoid sexualisation, an impractical and unjust expectation. This stance fails to address the multifaceted nature of the issue, which encompasses not only non-consensual sexual acts but also the dissemination of such content, both of which demand comprehensive discourse.

Furthermore, in an era dominated by AI and deepfake technology, the potential for exploitation knows no bounds, rendering personal responsibility ineffective as a defence. According to a 2019 report from Sensity, a company specialising in the detection and monitoring of deepfakes, a staggering 96% of deepfakes were of a non-consensual sexual nature, and 99% of these featured women as the subjects. This concerning trend extends beyond celebrities: it has become increasingly common to commission custom-made deepfake pornographic content featuring individuals of one’s choosing. Emphasising personal responsibility perpetuates victim-blaming narratives and imposes unjust limitations on individuals, particularly those in marginalised groups, hindering their freedom to navigate the digital realm. Such limitations have profound social implications and threaten content creators and influencers who depend on digital platforms for expression and income. Therefore, any approach to addressing online exploitation must prioritise systemic solutions that tackle root causes and empower individuals without placing undue burdens on them.

6. What Does the Solution Look Like?

The author has, across various platforms, consistently underscored the urgency of regulating sexually explicit media on digital platforms. To effectively eradicate Non-Consensual Intimate Images (NCIIs) from the digital landscape, it is imperative to comprehensively understand the nuances of Consensual Intimate Imagery (CII). This necessitates rigorous research within the field, coupled with the integration of such findings into policymaking processes. Further, we need to change the approach and look for an interim solution by creating systems in which humans act as the initiators and course-correctors and AI as the executor. While the development of a nuanced AI is imperative and remains the eventual goal, this approach acknowledges that, while striving for a permanent solution akin to a vaccine for the disease, the lives lost in the absence of a stop-gap measure cannot be disregarded.
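As a purely hypothetical illustration of such a stop-gap, human-in-the-loop design (the class name, thresholds, and queue logic are illustrative assumptions, not a description of any existing platform process), a system of this kind might look as follows: a human-verified report seeds a list of fingerprints, automation removes close matches, and borderline matches are routed back to a human reviewer.

```python
# Hypothetical sketch of a human-initiated, AI-executed takedown workflow.
# Toy 8-bit hashes and thresholds are used purely for illustration.

from dataclasses import dataclass, field

@dataclass
class TakedownSystem:
    confirmed_hashes: set[int] = field(default_factory=set)
    review_queue: list[tuple[str, int]] = field(default_factory=list)

    def human_confirms_report(self, image_hash: int) -> None:
        """Step 1 (human as initiator): a reviewed victim report seeds the
        fingerprint list that automation is allowed to act on."""
        self.confirmed_hashes.add(image_hash)

    def ai_screens_item(self, url: str, image_hash: int) -> str:
        """Step 2 (AI as executor): every newly indexed item is compared
        against confirmed fingerprints; close matches are removed
        automatically, borderline ones go to a human course-corrector."""
        distances = [bin(image_hash ^ h).count("1") for h in self.confirmed_hashes]
        best = min(distances, default=999)
        if best <= 1:
            return f"auto-removed {url}"
        if best <= 3:
            self.review_queue.append((url, image_hash))
            return f"queued {url} for human review"
        return f"no action on {url}"

system = TakedownSystem()
system.human_confirms_report(0b1011_0001)                 # human-verified NCII fingerprint
print(system.ai_screens_item("mirror-1", 0b1011_0011))    # auto-removed (distance 1)
print(system.ai_screens_item("borderline", 0b1011_1101))  # queued for human review (distance 2)
print(system.ai_screens_item("unrelated", 0b0100_1110))   # no action (distance 8)
```

The design choice mirrors the article’s framing: automation never originates a removal decision, it only propagates one a human has already verified, while ambiguous matches flow back to human review.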


[1] https://dhcappl.nic.in/dhcorderportal/GetFile.do
