By Mayank Bhandari and Ishika Chauhan, third-year students (5-Year B.A. LL.B.) at Dr. Ram Manohar Lohiya National Law University, Lucknow.
Have you ever wondered why, when you were looking for top law schools, your social media handles started showing you advertisements for CLAT coaching? Is it just a coincidence, or is there a correlation? In this article, the authors argue that there is a correlation, by analysing the audience-targeting services offered by intermediaries. The authors also discuss how intermediaries that target audiences through algorithms, by definition, escape the 'selecting the receiver' norm under Section 79 of the IT Act. Subsequently, the authors analyse (a) the circumstances in which such targeting is arguably not approved by law under Section 79; and (b) the defences taken by social media intermediaries to justify their business model based on targeted advertising.
Social Media Intermediaries
Section 2(w) of the Intermediary Guidelines 2021 (“2021 Guidelines”) provides that a “Social Media Intermediary is an intermediary which primarily enables interaction between two or more users and allows them to create, upload, share or access information using its services.” In other words, a social media intermediary is the virtual equivalent of a middleman between its users. These intermediaries provide services like messaging, accessing third-party content, and posting content to users for free, while charging ‘business accounts’ (who are ostensibly their actual customers) for targeted advertising. Targeted advertising means showing products to specific audiences, selected according to the traits, interests, and preferences specified by the business accounts. For example, intermediaries may target advertisements for CLAT coaching institutions at individuals who search for top law schools.
Algorithmic Targeting
The algorithms of platform providers like Facebook, Instagram, etc., learn about users’ interests and preferences by carefully observing their search patterns. As noted by the Delhi High Court in a recent case against an intermediary, “The algorithms select the content based on various factors including but not limited to social connections, location, and past online activity of the user. These algorithms are often far from objective with biases capable of getting replicated and reinforced.” For example, if a person recently wishlisted a Levi’s product on Myntra, Instagram may, based on this user activity, display the same product again and again until the user completes the purchase.
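The selection logic described above can be sketched as a toy scoring function. This is a deliberately simplified illustration of interest-based targeting, not any platform's actual algorithm; all names, data, and the scoring rule are hypothetical:

```python
# Toy illustration of interest-based ad targeting.
# All ads, keywords, and user activity here are hypothetical;
# real platform algorithms are far more complex and proprietary.

def score_ad(ad_keywords, user_activity):
    """Count how many of an ad's target keywords appear in the
    user's recent activity (searches, wishlists, etc.)."""
    activity_terms = {term for event in user_activity for term in event.split()}
    return len(set(ad_keywords) & activity_terms)

def select_ads(ads, user_activity, top_n=1):
    """Rank ads by overlap with user activity and return the best matches."""
    ranked = sorted(ads, key=lambda ad: score_ad(ad["keywords"], user_activity),
                    reverse=True)
    return [ad["name"] for ad in ranked[:top_n]]

ads = [
    {"name": "CLAT coaching institute", "keywords": ["law", "clat", "school"]},
    {"name": "Sneaker sale", "keywords": ["shoes", "sneakers"]},
]
user_activity = ["top law schools in india", "clat syllabus"]

print(select_ads(ads, user_activity))  # the CLAT coaching ad ranks first
```

The legally salient point the sketch makes concrete is that the "receiver" of the advertisement is chosen entirely by the scoring rule, with no human reviewing any individual match.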
Intermediaries tie up with web platforms and search engines like Google, Amazon, etc., and then project such advertisements accordingly. Intermediaries such as Facebook and LinkedIn are protected under the safe harbour provision of the Information Technology Act (“IT Act”), depending upon the role played by them. This protection takes the form of immunity from liability for any third-party content posted on their platforms.
Human vs Algorithmic Targeting Liabilities
It is important to understand how targeted advertising controlled by a human being differs from targeted advertising controlled by an algorithm. Knowledge can be attributed to a human being who selects a receiver, whereas no “actual knowledge” can be established in the latter case, since selection becomes an automatic process controlled by an algorithm. The introduction of the new Guidelines can therefore incentivise the use of automatic algorithmic selection for targeting advertisements.
In this part, we restrict our discussion to social media and e-commerce intermediaries. The Intermediary Guidelines are delegated legislation framed under Section 87 of the IT Act, with the 2021 Guidelines superseding the Intermediary Guidelines of 2011. Both sets of Guidelines prescribe due diligence requirements that an intermediary must satisfy to claim the safe harbour defence. Rule 3(3) of the 2011 Guidelines clearly stated that when an intermediary selects the receiver of the transmission, it plays an active role and hence cannot claim safe harbour. The 2021 Guidelines, however, introduced the term ‘targeted advertisement’ and specifically state that an intermediary can claim safe harbour protection if it displays that a particular advertisement is a ‘paid advertisement’. The effect of this rule is that users can distinguish whether a particular post is a paid promotion or not, increasing consumer awareness while viewing advertisements.

The Hon’ble Delhi High Court in Myspace Inc. v. Super Cassettes laid down three conditions that social media platforms must satisfy in order to claim the safe harbour defence. First, the intermediary must play a passive role: it should not have initiated the transmission, selected the receiver, or modified the content. Second, the intermediary must not ‘abet’ the commission of the unlawful act, failing which it loses its immunity; it must also expeditiously take down content on receiving actual knowledge of it or on being notified by the Government. Third, the intermediary must ‘observe’ the guidelines issued by the Central Government under Section 79.
How is targeted advertising not in consonance with these broad parameters?
In Amazon v. Amway India, it was argued that the targeted advertising feature offered by e-commerce platforms abets Amway’s direct sellers in breaking their contracts. This is an unlawful act under Section 27 of the Indian Contract Act, because it can be regarded as tortious interference with contract. This particular act of Amazon arguably breached the second parameter mentioned above, i.e., it abetted the commission of an unlawful act. Additionally, Amway contended that Amazon had tortiously interfered by inducing its direct sellers to breach their contracts, and further argued that its code of conduct for direct selling does not allow sellers to sell its products online.
The essential ingredients of the tort are knowledge of the relevant contract and intention. Further, there must be interference that prevented or hindered the performance of the contract, coupled with the use of unlawful means.[i] It is common market practice for direct selling businesses to make such contracts publicly available through various means, such as their websites. The Indian Direct Selling Association’s code of ethics itself bars distributors from selling their products on e-commerce websites. Amazon knowingly entered into contracts for advertising and selling such products with Amway’s direct sellers. This was alleged to be tortious interference, as the direct sellers expected that such targeted advertising would multiply their sales. In this case, the Single Judge of the Delhi High Court decided in favour of Amway; the decision was later reversed by the Division Bench, and the matter is now pending before the Hon’ble Supreme Court.
The Division Bench held that knowledge of the contract could not be attributed to Amazon, as it was entirely voluntary on the part of the direct sellers to contract. The Single Judge, by contrast, held that such knowledge, even if it could not be attributed earlier when the contract was only published on Amway’s website, could certainly be attributed once Amway sent a notice informing Amazon of the contract. The authors side with the Single Judge’s reasoning, because an intermediary cannot always claim ignorance: any reasonable person is expected to act when a legal notice of such a violation has been sent by the authorised company. Hence, the authors expect the Hon’ble Supreme Court to attribute knowledge to intermediaries in such future cases, especially because, under the new Intermediary Guidelines, an intermediary has to act within 36 hours of such a legal notice. This would make the application of the Shreya Singhal judgment correspondingly outdated in light of the new Guidelines.
How does targeted selling violate the first parameter and amount to actively selling a product?
In the landmark judgment of L’Oréal v. eBay, eBay actively advertised unauthorised L’Oréal products by targeting audiences. This was done through an automatic algorithm, without any actual knowledge. The High Court of England and Wales (Chancery Division) held that because “a keyword is the means used by an advertiser to trigger the display of his advertisement,” this method amounts to using a mark “in the course of trade” within the meaning of Art. 5(1) and Art. 9 of Council Directive 89/104/EEC. Hence, eBay was actively involved in selling such products. Therefore, targeted advertising can be said to constitute active selling, whether or not it is done by an algorithm, and safe harbour protection cannot be claimed.
Additionally, in Snapdeal v. State of Karnataka, it was held that an intermediary would not be liable for any action of a seller making use of the facilities provided by the intermediary as a marketplace. But this judgment did not opine on social media intermediaries, which not only provide a marketplace but also target audiences through advertisements. Therefore, this judgment would not have any persuasive value in cases where social media intermediaries are actively involved in the sales process.
Under the current regime, many questions remain unanswered in Indian law: what makes an intermediary an active seller, and what makes it passive? Does putting an ‘Amazon’s Choice’ label on products listed on Amazon amount to active selling? The Amazon case pending before the Hon’ble Supreme Court is expected to answer these questions, including when knowledge can be attributed to intermediaries. In the authors’ opinion, intermediaries should be made liable once knowledge is established on their part during algorithmic targeting. The Supreme Court’s decision in this case will be significant in explaining how such bias can be established on the part of intermediaries when the receiver of a targeted advertisement is selected, even by an algorithm.
Conclusion
Social media platforms are a fusion between mere infotainment channels and media content providers. As part of their service, these platforms not only host content; they assemble it, make it searchable, and in some cases even algorithmically select sub-categories of it to produce front-page offerings, news feeds, or customised recommendations. Content moderation is therefore the quintessence of these platforms: a crucial, persistent, and definitional part of what they do. Moderation is the essence of platforms, their basic working scheme; it is the service they offer.
Hence, transferring decision-making power from non-transparent and fallible human beings to transparent and automated algorithms would, it is claimed, ensure that intermediaries cannot, by definition, violate the 2021 Guidelines. Indeed, this is part of a broader narrative about the inerrancy, or at least the neutrality, of techno-driven decision-making, popularly known as ‘automation bias’.[ii] But this narrative fragments upon the touchstone of transformative equality in various ways. It fails to notice the human bias that exists at the time of data collection. Even when there is no deliberate bias at the time of collection, the process of collection will always imbibe existing patterns and structures of discrimination, which are then fed into the model that constitutes the algorithmic system.