Reeva Khunud K Pathan is a third-year student at CHRIST (Deemed to be University).

  1. INTRODUCTION

Artificial intelligence has been part of our lives for longer than we tend to assume, longer still if we account for Hellenic culture, which regarded technology as the ‘breath of the divine’. The human race aims at evolution, and technology plays a pivotal role in that pursuit. Technology even seems to act out Darwin’s theory of natural selection: those unable to adapt are simply discarded, while those who adapt survive as the fittest. The question, however, is whether such fast-paced developments truly ensure the survival of the fittest or go against the very nature of evolution.

  2. IMPACT OF DEEPFAKE TECHNOLOGY ON WOMEN

The technological landscape has also given rise to AI deepfakes: technology that uses a person’s physical likeness, with or without their consent, to create manipulated images, videos, or other media content. While it has beneficial uses, such as translation, it is prone to misuse because it is freely available and pervasive.

For centuries, women have endured oppression from their ostensibly superior counterparts, and this persists even today. Despite the narratives of empowerment touted by social media, the reality remains starkly different. Rather than ushering in genuine equality, technological advancements have merely devised new methods to victimise women while preserving the very structures that render them vulnerable and reliant on men for “protection.”

In contemporary times, as illustrated by popular films like Ex Machina, the perception of women remains largely unchanged. The film portrays its female robot as a hyper-sexualised entity, reinforcing the “dehumanisation” that runs through female representation. Depicting female robots as mere objects, despite their non-human status, reinforces harmful stereotypes and the objectification of women; the portrayal strips them of humanity and shapes how real women are treated, perpetuating dehumanising views. The essence of women should be respected regardless of whether their nature is biological or artificial.

Although women of the present generation have made significant strides from being treated as mere property to being recognised as human beings, deepfake technology is charting a troubling course. It perpetuates the treatment of women as objects accountable to men, now through the lens of artificial intelligence.

Deepfake porn has become a major issue in several countries, especially India, where three per cent of websites hosting deepfake porn were Indian, with women, particularly journalists, being the most affected. Female journalists are targeted as political vendetta for articles published against a political party, or to reinforce the patriarchal mindset that a woman should be shamed for reporting an unflattering piece on a male politician. An example is the deepfake porn video of the journalist Rana Ayyub, circulated in response to her stance against the ruling party for spreading anti-Islam sentiment.

  3. LEGAL ASPECTS OF DEEPFAKE TECHNOLOGY

Deepfake technology is a nuanced problem and is often swept under the rug. Even the BRICS countries’ initiative to develop the BRICS Science, Technology and Innovation Work Plan 2019-2022 only aims to facilitate further cooperation between nations through the CyberBRICS platform established in 2019. It seeks to address glaring data protection and cybersecurity issues, yet it completely overlooks the violation of a woman’s personhood. Moreover, it fails to offer any semblance of support for women who have been exposed to millions online in the most egregious manner.

The recently introduced National Defense Authorization Act (NDAA), 2024, aims to produce a comprehensive report on the use of deepfake technology for fraudulent activities during elections. However, the act does not refer in any way to the problems women suffer at the hands of deepfake technology. In the absence of a federal framework to regulate such technology, certain US states, like Virginia, have enacted comprehensive legislation against it. Notably, even the European Union does not have a comprehensive law against deepfake technology and, despite formulating a policy to tackle disinformation, does not address the harms deepfakes cause to women.

India, for its part, relies on the Indian Penal Code, 1860 and the Information Technology Act, 2000 for cybercrimes, including those against women. However, there is no comprehensive legislation protecting women against deepfake pornography.

However, it would be remiss to state that there is no legal precedent or principled approach for protecting women against deepfake technology, at least in the US, as highlighted by the Scarlett Johansson and OpenAI controversy. Johansson received an offer to voice “Sky” for the current GPT-4o system and, after much contemplation, refused. She was nonetheless forced to hire legal counsel when, despite her refusal, Sky’s voice sounded “eerily similar” to hers. She sent two letters asking OpenAI to detail the process by which Sky was created. OpenAI refused to divulge the name of the voice actress it had hired, citing privacy concerns, but reluctantly took down Sky’s voice.

It is profoundly clear from the Johansson incident that deepfakes are a flagrant violation of intellectual property rights and the right to publicity, which gives an individual the right to control the commercial use of their likeness and to prevent the unauthorised monetary exploitation of their persona. Johansson’s voice was used for endorsement even though it does not fall squarely within the copyright domain. Moreover, since she is a public figure whose voice in the film ‘Her’ is distinctive and recognisable, which is what led OpenAI to make the offer in the first place, its imitation would most certainly raise copyright and trademark infringement concerns. Johansson’s relief would rest on the precedent of Bette Midler’s case against Ford Motor Company: after Midler turned down an offer to perform one of her songs in a car commercial, the company had a backup singer perform it instead. Midler sued the company for trying to profit off her voice and succeeded: although the voice was not hers, it was deceptively similar and led the audience to believe it was.

It is also noteworthy that OpenAI is embroiled in several lawsuits, with many artists claiming that the company uses creative work to train its AI models without obtaining permission or consent. Hence, considering the slew of lawsuits, it is not far-fetched to state that this is a direct violation of the right to publicity, which has been evolving since the Midler case. The crux is that OpenAI is accused of copying the work of persons and not merely their style; copying style alone would not legally violate the right to publicity or any intellectual property right. Hence, in the USA, the unauthorised use of a person’s likeness can be sued for under the right to publicity.

  4. CONCLUSION

Legal systems worldwide have a distinct lacuna when it comes to regulating deepfake technology. The only protection that somewhat covers such harms is decades old and rests on legal precedent. Even that precedent does not protect women in general; it protects a specific set of women visible in the ‘public eye.’

Despite technological advancements and initiatives by global groupings like BRICS and by individual US states, there remains a stark absence of comprehensive legislation to protect women from the misuse of their likeness through deepfakes. As technology evolves, there is an urgent need for robust legal frameworks worldwide to safeguard against the misuse of AI and ensure the protection of all individuals.
