Vaishnavi Singh is a third-year student at Ram Manohar Lohiya National Law University, Lucknow, and Abhijeet Raj is a third-year student at Guru Gobind Singh Indraprastha University, Delhi.

Introduction

In July 2023, the first case of deepfake fraud in India was reported in Kerala, where a 73-year-old man fell victim to a deepfake scam and suffered a loss of Rs. 40,000. In November 2023, another senior citizen, this time in Uttar Pradesh, was extorted using a deepfake video of a retired IPS officer. In late 2023, a deepfake video that surfaced on social media, purportedly showing Indian actress Rashmika Mandanna, left netizens confused. In a notable observation in August 2024, the Delhi High Court called upon the Union government, describing deepfakes as a “serious menace in society”.

AI videos, now easy and effortless to create and containing almost flawless hand and shadow movements, can occasionally dupe even digitally expert viewers. This draws attention to the fact that, even in 2024, India has no dedicated legislation to deal with the misuse of AI in the form of deepfakes. The IT Act, 2000, the law currently in place, is a 24-year-old statute, making it an outdated instrument ill-suited to the new and emerging challenges relating to AI.

India recently introduced three new criminal laws, replacing the age-old colonial codes. However, despite being billed as “new-age penal laws,” none of them sufficiently addresses the exploitation of AI and deepfakes, which puts the digital security of citizens, especially women, at risk. This raises serious concerns about the adequacy of India’s existing legal-tech framework to deal with the misuse of AI-generated content.

Artificial Intelligence And Deepfakes: Innovation, Rise And The Threat

Artificial Intelligence refers to machinery and computer systems that perform human-like cognitive functions, e.g., learning, speaking, understanding and reasoning. While some of the earliest AI models attempted to simulate the functioning of a single neuron, the technology is now revolutionizing various disciplines and industries with far more sophisticated capabilities.

While the developing technology has increasingly been praised for its impressive capabilities in the fields of space research, robotics and automation, medical science and healthcare, it has given rise to one of the most controversial technological advancements in the domain – the “Deepfake Technology”, or “Deepfakes”.

Deepfakes, an emerging threat within the broader and more pervasive category of synthetic media, utilize artificial intelligence/machine learning algorithms and facial-mapping software to generate realistic videos, pictures, audio and/or text depicting people or events that never actually existed or occurred.

Deepfakes have wide-ranging legal and moral implications: fabrication of criminal evidence, misinformation and digital impersonation, increased risks of privacy violations, and novel forms of sexual offences against women and minor girls, including digital obscenity and deepfake pornography. Studies show that about 84% of social media influencers on platforms like Instagram and YouTube fall victim to deepfake pornography, of whom nearly 90% are female. Unfortunately, the National Crime Records Bureau has also yet to formulate a separate category for deepfakes and AI-facilitated abuse in its Annual Reports.

Numerous cases are being registered with no definite remedy. One example is a May 2024 petition seeking an order directing the Election Commission of India to create and enforce guidelines against the misuse of deepfake technology during the 2024 General Elections. Such cases underline the exploitative use of AI and deepfakes within India’s inadequate legal-tech framework.

The New Penal Laws Of India In The Era Of Artificial Intelligence And Deepfakes

Are the new Penal Laws sufficient for criminal liability of deepfakes and AI-related crimes?

Despite much protest, the Bharatiya Nyaya Sanhita, 2023, the Bharatiya Nagarik Suraksha Sanhita, 2023, and the Bharatiya Sakshya Adhiniyam, 2023 came into effect on 1 July 2024, replacing the erstwhile Indian Penal Code, 1860, the Code of Criminal Procedure, 1973, and the Indian Evidence Act, 1872, respectively.

Although the government envisioned a more justice-oriented and technology-friendly approach to criminal procedure, the new laws remain significantly deficient in a crucial area: criminal liability for offences involving AI-facilitated technologies such as deepfakes.

The Bharatiya Nyaya Sanhita, 2023 adds a new provision, “Organised Crime”, under Clause 111(1) of the Sanhita. The clause defines “organised crime” exceptionally broadly, encompassing everything from economic offences to cybercrimes, while leaving ambiguous terms like “cyber-crimes having severe consequences” open to interpretation. It is the only provision that refers to “cyber-crimes”, and even that limited reference may not cover novel offences involving AI-facilitated technologies such as deepfakes. Moreover, the Bharatiya Nagarik Suraksha Sanhita, 2023 and the Bharatiya Sakshya Adhiniyam, 2023 do not provide a separate criminal procedure for the offences mentioned in Clause 111(1), which raises further concerns about the credibility and proficiency of the new framework as far as penal provisions relating to AI-related crimes and deepfakes are concerned.

Global outlook on the issue

Globally, various nations have come up with dedicated amendments and separate legislations relating to deepfakes and AI-related crimes:

The European Union enacted the Artificial Intelligence Act (“EU AI Act”), its first law dealing with the use and regulation of AI, which also lays down regulations concerning deepfakes. In 2023, the UK government introduced reforms in its Online Safety Act and, for the first time, criminalised the sharing of deepfake intimate images. It also brought amendments through its Criminal Justice Bill to criminalise the creation of such intimate images without consent. Additionally, countries like the United States (Deepfakes Accountability Bill, 2023) and China (Artificial Intelligence Law of the People’s Republic of China) have introduced Bills requiring deepfakes to be labelled on online platforms, with non-compliance attracting criminal sanctions.

India, by contrast, lags behind in even introducing such ideas. It has certain provisions under the IT Act, 2000, namely Sections 66D, 66E, 67, 67A and 67B, which allow the prosecution of offences such as violation of privacy and the publication or transmission of obscene content. However, these provisions cannot effectively address the procedural aspects of AI-facilitated crimes, as they come into operation only after the offence has been committed. For instance, the EU AI Act requires deepfakes to be labelled at the initial stage itself, creating a procedural barrier for offenders; India has no such regulation that could deter the commission of the crime at the stage of initiation.

Way Forward

The government must establish a robust framework to address AI-facilitated crimes and deepfakes, given the evolving legal-tech landscape. Possible solutions include: introducing specific chapters in the new penal laws addressing AI-related crimes and deepfake technology, including procedural and evidentiary measures; amending existing laws, such as the IT Act, 2000 and the IT Rules, 2021, to enable more immediate takedown of offensive deepfake content upon receiving victim complaints; and enacting a dedicated law to regulate AI-related crimes while enhancing the authority of law enforcement agencies like the Intelligence Fusion & Strategic Operations unit of the Delhi Police.

Artificial Intelligence and its technologies, like deepfakes, despite being in their infancy, pose a significant long-term threat, and the government’s prompt attention in this direction is the need of the hour.