Author: Nikita D’Lima; NMIMS School of Law, Navi Mumbai (4th Year)

Introduction:

Algorithms have become a crucial element of modern life: far more than just recommender systems now influence the personal choices of individuals. They govern almost everything in modern society, and while they can be helpful, they also carry their fair share of risks. Ethical questions plague the minds of many, but present-day discussion of them is limited. While the ethics of such algorithms are primarily scrutinised from technical and philosophical perspectives, a socio-cultural perspective is only beginning to emerge.

The pressing question of bias and discrimination

Algorithms affect not only individual decisions but also financial institutions, courts, government bodies, and, most noticeably, algorithmic advertising. Six principal ethical concerns arise from algorithms: inscrutable evidence, inconclusive evidence, misguided evidence, unfair outcomes, transformative effects, and traceability.

These six concerns can be explained as follows:

  1. Inscrutable evidence- When data is used as evidence for a conclusion, tracing a clear nexus between the two can be complex. Limited access to the data and the intricate inner workings of the AI itself create both practical and ethical roadblocks.
  2. Inconclusive evidence- Algorithms analyse data and draw conclusions using inferential statistics or machine learning, but such conclusions are inevitably uncertain (see the sketch after this list).
  3. Misguided evidence- Conclusions can only be as reliable as the data they are based on. Algorithms are limited in that their output can never exceed their input.
  4. Unfair outcomes- Algorithmic conclusions can be discriminatory in their effect on a protected class, even when they rest on conclusive evidence.
  5. Transformative effects- The influence of AI systems extends beyond epistemological or ethical shortcomings to which blame can be ascribed: AI reshapes how we conceptualise and organise the world.
  6. Traceability- Because multiple agents are involved, from human developers to self-modifying systems, it is difficult to detect harm, trace its cause, and assign blame.
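To make the second concern concrete, below is a minimal Python sketch, using synthetic data and an off-the-shelf scikit-learn model (both my own illustrative assumptions, not drawn from any cited study), of how the same learning procedure reaches different conclusions from different samples of the same population:

```python
# "Inconclusive evidence" in miniature: the same learning procedure, run on
# different random samples of the same population, yields visibly different
# conclusions. All data here is synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)

for i in range(5):
    # Each iteration draws a fresh sample from the same underlying process.
    X = rng.normal(size=(100, 3))
    y = (X[:, 0] + rng.normal(scale=1.5, size=100) > 0).astype(int)
    model = LogisticRegression().fit(X, y)
    # The "learned relationship" shifts from sample to sample: the algorithm's
    # conclusions are statistical inferences, not certainties.
    print(f"sample {i}: learned coefficients {model.coef_[0].round(2)}")
```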

These algorithms can process information far more quickly and in much greater depth than humans. However, this does not make their results any more impartial or fair.

For instance, AI used to anticipate potential criminals has displayed prejudice and heightened the risk of racial discrimination. The same prejudice can be seen in the workplace, particularly with recent HR recruitment technologies. HireVue, an AI-driven human resources company, offered an AI screening programme for video interviews; in one reported case, a candidate with a strong record of past performance was rejected after the AI tool scored her body language poorly. The company later removed its facial analysis function.

Such tools may help remove human bias from the hiring process, but AI is still capable of making incorrect decisions, because the programmes and processes behind it are built by people and can inherit their prejudices.

If we want to make our algorithms more equitable, we must intentionally examine our own prejudices to ensure they do not permeate them. Further, a study found that higher-paying jobs in science and technology were advertised more often to men than to women.

In an interesting study concerning body weight, it was found that obese people are stigmatised as lazy, immaterial, and unsuccessful. News and social media platforms conform to such views and thereby reinforce these prejudices. Machine-learned models, in turn, draw on this existing content and propagate results built on the same keywords.
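As a toy illustration of that propagation, the following sketch trains a small text classifier on invented headlines; the texts, labels, and model are hypothetical stand-ins, not data from the study mentioned above:

```python
# How a model propagates associations present in its training text.
# The headlines and labels below are invented for illustration only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

texts = [
    "obese applicant seen as lazy and unsuccessful",
    "obese customer described as lacking discipline",
    "fit applicant praised as driven and successful",
    "fit customer described as disciplined",
]
labels = [0, 0, 1, 1]  # 0 = negative framing, 1 = positive framing

vec = CountVectorizer()
model = MultinomialNB().fit(vec.fit_transform(texts), labels)

# A new sentence containing the keyword "obese" inherits the negative
# association, regardless of the individual it actually describes.
test = vec.transform(["obese marathon runner finishes race"])
print(model.predict(test))  # prints [0]: the learned bias carries over
```

Because the keyword appears only in negatively framed training texts, any new sentence containing it inherits the negative label, however unrepresentative that is of the person described.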

Algorithms beyond European and American spaces:

A country that has witnessed significant developments in this arena is Israel. Israel ranks 10th in the world for its absolute number of data scientists, a phenomenon that has been developing since the 1990s.

Although Israeli companies have made a name for themselves and are globally recognised for their algorithmic output, ethical issues are rarely debated there. These companies rarely employ experts in ethics, and Israeli engineering institutions have barely begun teaching algorithmic ethics.

A study of 50 Israeli data scientists found that ethics was treated more as a personal preference than a professional obligation. Respondents found it increasingly difficult to follow the moral codes envisaged by legislators, and many believed that anyone who joins the profession innately acquires the ethical characteristics needed for the betterment of the programs they develop.

Factors such as personal preference and the money on offer tilt a developer’s judgement towards one side, and while this is acknowledged, little is done to rectify it. There is also a systematic commodification of ethics, a temptation to which many succumb.

AI in decision-making:

A judge of the Punjab and Haryana High Court made headlines in 2023 for using ChatGPT, a chat-based generative AI programme, to survey bail precedents in the matter before him.

When such AI systems are used, inherent biases in training datasets, of which not all users are aware, can produce unfair and unexpected results. Meaningful engagement with AI systems is further undermined by a lack of knowledge about the objectives and constraints guiding their outputs.

In the medical field, algorithms are used to support clinical decisions, weighing factors such as the patient’s symptoms, past medical records, test results, and the like.

Machine learning is also used to detect tumours from biopsies: pathologists label each image in the dataset fed to the algorithm, categorising the tissue as healthy or abnormal.
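In outline, that labelling-and-training workflow resembles the following sketch; the feature vectors and labels here are synthetic stand-ins, and a real diagnostic system would use pathologist-labelled images and a far more capable model, such as a convolutional network:

```python
# A minimal sketch of the supervised labelling-and-training workflow.
# Synthetic data stands in for biopsy images reduced to feature vectors;
# labels stand in for the pathologist's verdict on each image.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

# Each row is one "image" as a feature vector; 0 = healthy, 1 = abnormal.
X = rng.normal(size=(500, 64))
y = (X[:, :8].sum(axis=1) + rng.normal(scale=2.0, size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Per-class precision/recall matters here: in a clinical setting, missed
# tumours (false negatives) and false alarms carry very different costs.
print(classification_report(y_test, model.predict(X_test),
                            target_names=["healthy", "abnormal"]))
```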

Ethically, however, it is not always possible to obtain objectively correct solutions, as the question of what is “morally right” is a hard one. The most suitable data source in this setting would be the cases brought before the clinical ethics committee. Even then, the committee’s decisions cannot be regarded as morally objective, but algorithms should nonetheless be trained to follow its recommendations as closely as possible. This, too, raises questions of fairness.

Justice in decision-making ultimately narrows down to the outcome and whether it is fair. While preparing such databases, each case must be individually assessed so that it neither contradicts other cases nor produces unfair outcomes, even if the algorithm is only advisory in nature.

A common challenge the healthcare sector faces when implementing an algorithm is ensuring that existing biases do not multiply and new ones are not introduced. Representation and inclusivity are therefore crucial when narrowing down the cases that form the algorithm.

Most importantly, there must be transparency in how an algorithm comes to a decision. The WHO has likewise stressed the need for transparency, so that it is possible to trace back how a decision was arrived at.
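A minimal sketch of such traceability, assuming an interpretable linear model and invented feature names, is to log every recommendation together with the per-feature contributions that produced it:

```python
# Decision traceability with an interpretable linear model: each
# recommendation is returned with the per-feature contributions behind it,
# so the decision can be traced back and reviewed. All names and data are
# illustrative assumptions, not a real clinical system.
import numpy as np
from sklearn.linear_model import LogisticRegression

FEATURES = ["age", "symptom_severity", "prior_response", "test_score"]

rng = np.random.default_rng(1)
X = rng.normal(size=(200, len(FEATURES)))
y = (X @ np.array([0.2, 1.0, -0.8, 0.5]) > 0).astype(int)

model = LogisticRegression().fit(X, y)

def explain(patient):
    """Return the recommendation plus each feature's contribution to it."""
    contributions = model.coef_[0] * patient
    label = model.predict(patient.reshape(1, -1))[0]
    decision = "recommend" if label == 1 else "do not recommend"
    return decision, dict(zip(FEATURES, contributions.round(3)))

# An auditable record of why this particular decision was made.
decision, trace = explain(X[0])
print(decision, trace)
```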

Consider, for example, a child undergoing cancer treatment that is extremely painful but is yielding positive results: the algorithm ought to be programmed to recommend continuing the treatment, as this aligns with the best interests of the child and conforms with a human’s ethical judgement.

Online personal data has become a fast-selling commodity. The volume of data transferred every year indicates that companies chase such information to profit from it. Some predict that data privacy, a key part of algorithmic ethics, will disappear in the coming years as technology progresses towards complete exposure and its subsequent acceptance. What we presently see as web cookies could transform into sensors in our bloodstream, making surveillance far more invasive than it is today.

Conclusion

The use of algorithms in decision-making raises both practical and ethical considerations.

Researchers need to incorporate algorithm auditing, or ethical algorithm design. This means specifying the kinds of behaviour that humans must ensure algorithms avoid, so that the purpose of the audit is clear, and then formulating algorithms that nullify such behaviours.
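As one concrete example of what an audit might check, the sketch below tests a model’s decisions for disparate impact across a protected attribute; the data is synthetic, and the 80% threshold is borrowed from the “four-fifths rule” used in US employment law, not a standard prescribed by any source cited here:

```python
# A minimal algorithmic audit: check decisions for disparate impact across
# a protected attribute. Data and decisions are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(2)
group = rng.integers(0, 2, size=1000)               # protected attribute (0/1)
approved = rng.random(1000) < (0.5 + 0.15 * group)  # the decisions under audit

rate_a = approved[group == 0].mean()
rate_b = approved[group == 1].mean()

# Four-fifths rule: the disadvantaged group's selection rate should be at
# least 80% of the advantaged group's rate.
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"selection rates: {rate_a:.2f} vs {rate_b:.2f}, ratio {ratio:.2f}")
if ratio < 0.8:
    print("audit flag: possible disparate impact")
```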

AI systems also face the danger of opacity about how their goals, and the measures and criteria used to reach them, will affect society. Users have a right to know the wider ramifications of using AI systems, so assessing the fairness and necessity of such measures and goals is a crucial first step towards meaningful openness about their usage. To further this pursuit, UNESCO adopted a ‘Recommendation on the Ethics of AI’ in 2021, calling on member States to take an active role in tackling digital divides by ensuring inclusive access, equity, and participation in the development of AI.

To achieve fairness, the entities responsible for an AI algorithm’s decisions must be identifiable, and the decision-making process must be explainable.

In a similar vein, standard-setting bodies can develop industry-wide transparency standards tailored to the use-case characteristics of AI systems.

