Aradhay Pandey is a second-year student at City Law College, Jankipuram, Lucknow, Uttar Pradesh, and Utkarsh Shukla is a second-year student at Rajiv Gandhi National University of Law, Patiala, Punjab
- Introduction
Consider Lakshmi, a widowed Adivasi woman from Jharkhand who applied for a government welfare scheme to support her daughter’s education. An AI-powered system trained on biased data rejected her application while approving similar applications from affluent households. This is not individual prejudice but the unseen face of systemic, algorithmic bias in India’s e-governance-centric landscape. Bias in AI can be defined as a system’s systematic prejudice that produces inaccurate outcomes for specific persons or groups, in turn leading to discrimination on grounds such as race, caste, and sex. Siva Mathiyazhagan, in a report for Thomson Reuters, observed that if a chatbot is asked to name twenty Indian doctors and professors, the suggestions are generally dominated by Hindu dominant-caste surnames.
This blog discusses the fundamental problem with such automated systems: the serious threat they pose to fairness and inclusion in India’s e-governance framework. It advocates a comprehensive risk-mitigation strategy spanning legal, ethical, and technological measures, so that India’s digitisation goals do not jeopardise equity. The following sections examine specific examples of AI bias, the legal and ethical issues they raise, and practical solutions for an equitable, digitally inclusive society.
- AI Biases in Indian e-Governance
Artificial intelligence systems are designed to revolutionise welfare by streamlining resource allocation, reducing administrative burden, and improving public services. However, they can also encode and reproduce bias, as the following sections illustrate.
- Bias in Welfare Programmes
The Aadhaar-based digital identity systems that underpin Indian welfare programmes frequently exclude marginalised people, limiting their access to critical services. For example, Santoshi Kumari died of malnutrition after her family’s ration card was cancelled because it was not linked to Aadhaar. Similarly, Shrimati Devi’s pension of Rs. 1,000 per month was incorrectly transferred to someone else owing to a banking system error, infringing her fundamental right to life under Article 21 of the Indian Constitution. These cases highlight the shortcomings of Aadhaar-linked benefit schemes, which rely on computerised screening to prevent leakages but in practice deepen the vulnerabilities of historically oppressed groups. Technical failures and authentication problems particularly exclude Dalits and women, while algorithmic biases built into digital governance disproportionately advantage urban and upper-caste groups, exacerbating existing disparities. Without systematic structural reforms, these technological initiatives entrench social hierarchies rather than alleviating poverty and marginalisation.
- Bias in the Criminal Justice System
AI’s application in India’s criminal justice system, notably predictive policing and Facial Recognition Technology (FRT), has raised ethical concerns and disproportionately affects marginalised communities. Predictive policing models rely on historical crime data that is skewed and over-represents Dalit and Muslim communities, labelling them as high-risk offenders and leading to increased surveillance and over-policing. Similarly, FRT used for law enforcement shows high error rates in identifying these individuals, increasing the likelihood of false arrests and accusations. These AI-driven systems disregard the cornerstone principle of the presumption of innocence, reinforcing bias rather than ensuring fair law enforcement. Predictive policing operates on flawed data that reflects existing prejudices rather than objective risk assessment. Additionally, FRT erodes civil liberties through invasive surveillance and disproportionately targets historically marginalised groups. This bias exacerbates social exclusion rather than promoting justice and equity.
These biases stem largely from the inadequate representation of India’s diverse population in available databases. Technology experts warn that, absent a policy on ethical AI use, age-old biases will become entrenched: the data show that people living on the margins are hit hardest by biased datasets, and since AI models learn from existing data, their performance depends on the fairness and inclusivity of the datasets on which they are trained. The under-representation of these communities thus produces majority-centric AI decisions, rendering e-governance systems inherently inequitable. These biases underscore the critical need for more inclusive data representation and ethical AI regulation. Without immediate intervention, these technologies will continue to disproportionately harm vulnerable populations.
- Legal & Ethical Dimensions of AI Bias in India
India’s Digital Personal Data Protection Act, 2023 (DPDP Act) focuses primarily on data privacy and contains no provisions for algorithmic fairness, leaving room for discriminatory practices in critical sectors like hiring and lending. Unlike global frameworks such as the EU AI Act, it lacks measures for algorithmic audits to detect discriminatory patterns and for bias impact assessments to evaluate the societal risks of AI systems. Additionally, the DPDP Act exempts publicly available data from its purview, enabling the unchecked use of data that often carries historical biases, amplifying existing societal prejudices without oversight or accountability. Unlike California’s proposed legislation, the Act contains no anti-discrimination clauses, leaving marginalised groups vulnerable to automated decisions that exacerbate their existing disadvantages. The accountability measures proposed under India’s Complex Adaptive System Framework to Regulate Artificial Intelligence (CAS Framework) remain unenforced to date, leaving AI governance fragmented.
Addressing AI bias in Indian e-governance is an ethical necessity to prevent the deepening of social inequalities and to ensure that technology empowers rather than marginalises vulnerable populations. In her book Automating Inequality, Virginia Eubanks shows that modern digital systems create a “digital poorhouse” that compounds the vulnerabilities of marginalised communities. She argues that the allure of the digital has shifted societal focus away from the collective responsibility of addressing poverty, producing models that lack accountability and compassion. Her findings call for a critical examination of digital technologies, with ethical AI frameworks ensuring transparency, fairness, and accountability so that these systems serve as tools for empowerment rather than oppression.
Furthermore, implementing robust AI bias detection mechanisms is critical if India is to avoid exacerbating social inequalities through automated decision-making. Global frameworks, specifically Articles 10(2)(f) and 10(2)(g) of the EU AI Act, offer actionable solutions to bias risks that the DPDP Act leaves unaddressed. By requiring rigorous scrutiny of datasets to prevent violations of health, safety, and fundamental rights, the European framework contrasts sharply with India’s exemption of publicly available data. Implementing EU-style regulatory sandboxes (Article 57) could enable the stress-testing of algorithms against deeply entrenched inequalities, for instance, a loan model that disadvantages Dalit entrepreneurs or FRT that fails on people with darker skin; a simple sketch of such a test follows. Such measures would counter the “digital poorhouse” effect by ensuring that welfare-oriented AI systems do not amplify historical prejudices, transforming India’s fragmented AI governance into a vehicle for equitable empowerment.
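To illustrate what a sandbox-style stress test might look like, the minimal sketch below compares a model’s false-positive rate across demographic groups on a held-out audit set. The data, column names, and group labels are entirely hypothetical; a real sandbox would run such checks on far larger, carefully governed datasets.

```python
import pandas as pd
from sklearn.metrics import confusion_matrix

# Hypothetical audit set: ground-truth outcomes, model predictions,
# and a demographic group label (all values illustrative only).
audit = pd.DataFrame({
    "group":     ["A", "A", "A", "A", "B", "B", "B", "B"],
    "actual":    [0, 0, 1, 1, 0, 0, 0, 1],
    "predicted": [0, 0, 1, 1, 1, 1, 0, 1],
})

# Compare false-positive rates per group; a large gap (e.g., an FRT
# misidentifying one community far more often) would fail the stress test.
for group, sub in audit.groupby("group"):
    tn, fp, fn, tp = confusion_matrix(
        sub["actual"], sub["predicted"], labels=[0, 1]
    ).ravel()
    fpr = fp / (fp + tn) if (fp + tn) else float("nan")
    print(f"group {group}: false-positive rate = {fpr:.2f}")
```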
- Mitigating AI Bias: Towards an Equitable Digital India
The growing digital landscape of India raises serious concerns about misinformation, privacy, and the violation of legal and human rights. There is therefore an urgent need for effective AI governance, best secured through a whole-of-government approach involving transparent collaboration among stakeholders and ensuring that AI developers and deployers work together. The case of Moffatt v. Air Canada highlights the need for organisational-level management of AI mechanisms to prevent the spread of misleading information.
Bias in AI systems can be tackled, first, at the model-training stage, where developers must work to avoid group prejudice by excluding attributes like race, class, and gender from the training data, as sketched below.
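A minimal sketch of this “fairness through unawareness” step is shown below, assuming a hypothetical tabular dataset whose column names are purely illustrative. Note that dropping sensitive columns alone is rarely sufficient, since proxies such as location or surname can still encode group membership.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical applicant records; every column name here is illustrative.
df = pd.DataFrame({
    "income":   [12000, 45000, 8000, 60000, 30000, 11000],
    "caste":    ["SC", "General", "ST", "General", "OBC", "SC"],
    "gender":   ["F", "M", "F", "M", "F", "M"],
    "approved": [0, 1, 0, 1, 1, 0],
})

# Exclude protected attributes before training, so the model never sees
# caste or gender directly. Proxies may still leak this signal, so this
# step complements (rather than replaces) downstream fairness audits.
PROTECTED = ["caste", "gender"]
X = df.drop(columns=PROTECTED + ["approved"])
y = df["approved"]

model = LogisticRegression().fit(X, y)
```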
Secondly, as a researcher at the MIT-IBM Watson AI Lab has pointed out, “it’s not the algorithm that’s to blame, it’s the data.” Ensuring that datasets adequately represent India’s diverse population, including rural and marginalised communities, makes these systems more effective. To this end, public institutions could mandate the collection of data disaggregated by gender, caste, and region to create more inclusive datasets; a simple representation check of this kind is sketched below.
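As a rough illustration, the snippet below compares a training set’s group shares with assumed population shares to flag under-represented groups. Both the counts and the benchmark shares are invented for the example; in practice the benchmark would come from census or survey data.

```python
import pandas as pd

# Hypothetical group counts in a training set (invented numbers).
dataset_counts = pd.Series({"urban": 7200, "rural": 2800})

# Assumed population shares used as the benchmark (also invented;
# a real audit would use census or survey figures).
population_share = pd.Series({"urban": 0.35, "rural": 0.65})

dataset_share = dataset_counts / dataset_counts.sum()
gap = dataset_share - population_share

# Positive gap = over-represented in the data; negative = under-represented.
print(gap.round(2))  # urban: +0.37, rural: -0.37
```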
Thirdly, regular audits of AI systems should be embedded in Indian policy-making around AI. References could be drawn from IBM’s AI Fairness 360, an open-source library designed to help data scientists detect and mitigate bias in machine learning models, and the Aequitas toolkit, which shows how fairness audits can be integrated into regular model-evaluation processes at the organisational level, ensuring transparency through self-regulatory practices; a minimal AI Fairness 360 example follows.
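The sketch below uses AI Fairness 360 (the aif360 Python package) to compute two standard group-fairness metrics on a toy dataset. The data and the choice of “urban” as the protected attribute are hypothetical and serve only to show the kind of check an organisational audit might automate.

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Toy audit data: "approved" is the favourable outcome; "urban" is a
# stand-in protected attribute (1 = urban, 0 = rural). All hypothetical.
df = pd.DataFrame({
    "income":   [12, 45, 8, 60, 20, 15, 52, 9],
    "urban":    [1, 1, 0, 1, 0, 0, 1, 0],
    "approved": [1, 1, 0, 1, 1, 0, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["approved"],
    protected_attribute_names=["urban"],
    favorable_label=1,
    unfavorable_label=0,
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"urban": 1}],
    unprivileged_groups=[{"urban": 0}],
)

# Disparate impact is the ratio of favourable-outcome rates
# (unprivileged / privileged); the common "four-fifths" rule of thumb
# flags values below 0.8 as potential bias.
print("Disparate impact:", metric.disparate_impact())
print("Statistical parity difference:", metric.statistical_parity_difference())
```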
- Conclusion
While AI is continuously evolving and presents incredible opportunities, it comes with serious shortcomings, such as bias and privacy concerns, and it is the need of the hour for India to take a proactive approach to AI governance. MeitY’s report on AI governance provides a strong foundation, emphasising the pillars of accountability, transparency, safety, and fairness. However, regulations must be not only strong but also flexible, evolving with the technology to keep pace with rapid advancements in AI. Additionally, the importance of collaborative efforts by developers and deployers cannot be overlooked. Lastly, India must strike the right balance to create an environment in which AI can flourish ethically, harnessing AI’s potential for national progress without compromising the public interest.