Gurman Narula & Sharad Khemka are students at National Law Institute University, Bhopal

  1. Introduction

Artificial Intelligence, with its advent tracing back to 1951, has grown considerably in the last few years and has become part of the daily lives of people around the world, with its use spread across various fields. However, most AI systems carry an inherent problem: their inner workings are not visible, even to their developers. Such systems are referred to as ‘Black Box AI’. These black box AI systems pose various challenges to transparency and accountability, raising legal and regulatory concerns, which this blog discusses in light of initiatives taken by different countries.

  2. Black Box AI: Definition and the concept behind it

In simple terms, Black Box AI operates on the principle of ‘machine learning’, where it is trained on large datasets to make decisions or predictions. In ‘machine learning’, an algorithm is fed a vast amount of data and is then trained to recognize patterns and features in that data. However, complexity arises once these algorithms begin to produce accurate predictions: the computation behind each result becomes too intricate to follow.

The layers of computation conceal the path of calculation that leads to the final result, thereby forming a ‘black box’ in which the route to the final decision remains hidden. In such a system, an issue of transparency arises, as we are unable to see how the AI has reached a decision. Examples of Black Box AI are many, including applications used for facial recognition to identify suspects, predictive policing, medical diagnosis to identify a disease, self-driving cars, and fraud detection in the finance sector. In all these systems, the decision-making process used to reach a conclusion is not comprehensible to human beings.
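
To make the black box concrete, here is a minimal, hypothetical sketch in Python (assuming scikit-learn and purely synthetic data, not any system mentioned in this post): the trained model answers readily, but its ‘reasoning’ is spread across thousands of internal decision nodes that no human can read as an explanation.

```python
# A minimal, hypothetical sketch of why a trained model behaves as a "black box":
# the answer is easy to obtain, but the computation behind it is not humanly traceable.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic data standing in for, say, loan applications or patient records
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Train an off-the-shelf model on the data
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, y)

# The model confidently classifies a new case...
print(model.predict(X[:1]), model.predict_proba(X[:1]))

# ...but its "reasoning" is distributed over thousands of decision nodes, none of
# which amounts to a human-readable explanation of this particular outcome.
print(sum(tree.tree_.node_count for tree in model.estimators_), "internal decision nodes")
```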

  3. Conundrums Faced by Black Box AI: Opacity and Unwanted Outcomes

The problems with the opaque nature of such systems come into play when the decisions made by AI systems go awry or lead to unexpected and unwanted outcomes. There are various such scenarios: an autonomous vehicle hits a pedestrian, facial identification by AI leads to a wrongful arrest, or a system fails to identify a disease. Further, AI systems can also develop biases owing to preconceptions in the training data or prejudiced assumptions made during the algorithm development process. These can lead to situations where AI shows racial bias, as in the case of COMPAS, where white offenders were incorrectly judged as ‘low risk’ compared to their black counterparts, and in a healthcare system where black patients were concluded to be healthier than equally sick white patients; or sexism, as in the cases of Amazon’s recruitment algorithm, which did not sort resumes in a gender-neutral manner and gave preference to male candidates, and Google’s advertisement system, which showed high-paying job advertisements to men more often than to women.

The opacity of AI systems in these situations exacerbates the challenges by hindering explainability. It becomes difficult to understand why a certain decision was made, which hinders attempts to correct mistakes and prevents effective redressal. This lack of openness impedes the widespread adoption of AI technologies by undermining user confidence and sowing doubt.

  4. The need for transparency

AI transparency refers to openness about the decision-making, operation, and behavior of AI systems. The ability to foster trust in AI systems and to give users and other stakeholders greater assurance that a system is being used appropriately is a key driver behind the need for AI transparency and explainability. Further, transparency makes it possible to recognise and address the biases and discriminatory patterns that develop in AI algorithms, as mentioned above.

The need for transparency has been recognised by the NITI Aayog in its three-part paper titled “Towards Responsible AI for All”, where the ‘Principle of Transparency’ is set out for the responsible management of AI. It is defined as: “The design and functioning of the AI system should be recorded and made available for external scrutiny and audit to the extent possible to ensure the deployment is fair, honest, impartial and guarantees accountability.” The paper also cites Supreme Court decisions observing that “transparency in decision making is critical even for private institutions. The Constitution guarantees accountability of all state action to individuals and groups.”

The requirement of unbiased decision-making has also been emphasized in the paper: under the ‘Principle of Equality’, it is stated that “AI systems must treat individuals under same circumstances relevant to the decision equally”, and this principle has been linked to Article 14 of the Constitution.

Apart from this, transparency also finds a place in legislative action, as in the upcoming EU AI Act. The Parliament position (adopted on 14 June 2023) recognises transparency as a general principle applicable to all AI systems under Article 4a, which states: “AI systems shall be developed and used in a way that allows appropriate traceability and explainability while making humans aware that they communicate or interact with an AI system as well as duly informing users of the capabilities and limitations of that AI system and affected persons about their rights.” Article 13 takes a similar position, stating that “high-risk AI systems shall be designed and developed in such a way to ensure that their operation is sufficiently transparent to enable providers and users to reasonably understand the system’s functioning.”

The EU’s GDPR, with its emphasis on data protection, also warrants transparent AI systems, as can be seen in Article 15, under which data subjects must be informed of the existence of automated decision-making (i.e., AI systems) and given ‘meaningful information about the logic involved.’ Such information about the logic can only be provided if the system is transparent. Various other national frameworks and conventions also emphasise the requirement of transparent and explainable AI. This can be seen in Singapore’s Model AI Governance Framework, whose guiding principles include transparency, explainability, and fairness. Similarly, the USA’s Blueprint for an AI Bill of Rights is also in line with the need for transparent and accountable AI in the private sector.

The UK (United Kingdom) has also released a white paper on AI regulation. Unlike the EU’s AI Act, the UK White Paper suggests an alternative approach to AI regulation. Instead of enacting comprehensive legislation, the UK Government aims to establish expectations for AI development and use. It empowers existing regulators, such as the ICO (Information Commissioner’s Office), the FCA (Financial Conduct Authority), and the CMA (Competition and Markets Authority), to issue guidance and regulate AI use within their respective remits. The White Paper outlines a regulatory framework based on five overarching cross-sectoral principles for AI, which include promoting transparency and explainability. Initially non-statutory, the principles will guide regulators in issuing domain-specific guidance and practical tools within the next year. The White Paper foresees a potential future statutory duty for regulators to consider these principles in their decision-making processes, ensuring responsible AI development and deployment.

International frameworks and recommendations also mention the need for transparency, as can be found in the preamble of the UNESCO Recommendation on the Ethics of Artificial Intelligence, which has been adopted by all 193 Member States, and in the OECD’s AI Principles, under which countries have agreed to promote AI systems that are transparent and explainable.

  5. The way forward: A move towards Transparency

Transparent AI systems are crucial for understanding decision-making processes, especially when undesired outputs or biases occur, and legal and regulatory frameworks emphasise the necessity of transparency. However, implementing transparency in Black Box AI is challenging. Further, regulating AI transparency may stifle innovation by introducing regulatory hurdles for newcomers. Increased transparency could also impede AI efficiency and advancement, and there is no guarantee that it will actually be achieved.

An alternative approach is to hold users responsible for overseeing AI decisions and addressing unpredictability. Rather than enforcing transparency, users should supervise AI decisions to ensure rationality. Vicarious liability on the lines of respondeat superior (making a party responsible for the acts of its agents), where the AI can be construed as an agent of its user since it makes decisions to help that user, is sensible when AI operates autonomously in critical contexts. In situations with a high risk of externalising failure, such as interconnected markets or medical procedures, users or creators should bear broader liability owing to the AI’s opaque decision-making.

A way forward can be to implement Explainable Artificial Intelligence (XAI) techniques such as LIME (Local Interpretable Model-Agnostic Explanations), SHAP (SHapley Additive exPlanations) and DALEX (Descriptive Machine Learning Explanations), which make the decision-making process of AI systems comprehensible to humans by approximating the black box model with simpler, interpretable models that explain its individual predictions, thereby increasing trust in the outputs produced by such systems.
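
As an illustration, the sketch below assumes the `shap` library and reuses the hypothetical RandomForest model trained in the earlier sketch; it shows, in outline, how a post-hoc explainer attributes an individual prediction to the input features. LIME and DALEX expose broadly analogous interfaces.

```python
# A minimal, hypothetical sketch of a post-hoc explanation using the shap library,
# applied to the RandomForest model from the earlier sketch.
import shap

# Build an explainer around the already-trained, opaque model
explainer = shap.TreeExplainer(model)

# Shapley values attribute a single prediction to each input feature:
# positive values push the model towards the predicted outcome,
# negative values push against it.
shap_values = explainer.shap_values(X[:1])
print(shap_values)
```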

In conclusion, addressing the challenges posed by Black Box AI requires a delicate balance between transparency and innovation. While legal frameworks and global initiatives emphasize the necessity of openness, alternative approaches, such as user oversight and explainable AI techniques, offer potential solutions to navigate the complexities of AI decision-making, fostering trust and responsible advancement.
