There is a need for a legal and organisational framework to regulate bias in algorithms
What is an algorithm, and what is the big deal about permitting it to make decisions? After all, it is merely a set of instructions that can be used to solve a problem. The reasons for the increasing reliance on algorithms are evident. First, an algorithm can make decisions more efficiently than human beings can, which lends it an air of superiority over human judgement. Second, an algorithm offers emotional distance: it can feel less uncomfortable to let a machine make a difficult decision on your behalf.
However, algorithms are susceptible to bias, and machine learning algorithms especially so. Such bias often remains concealed until it has affected a large number of people. Because algorithms are increasingly used to make evaluative decisions that can adversely affect our daily lives, and to allocate scarce social-welfare resources, their potential for bias must be examined.
The use of AI in governance in India is still nascent. However, this will soon change, as the use of machine learning algorithms in various spheres has either been conceptualised or has already commenced. For example, the Maharashtra and Delhi police have taken the lead in adopting predictive policing technologies, and the Ministry of Civil Aviation plans to install facial recognition systems at airports to speed up security checks.
The primary source of algorithmic bias is training data: an algorithm's predictions are only as good as the data it is fed. A machine learning algorithm is designed to learn from patterns in its source data, but that data may be polluted by record-keeping flaws, biased community inputs and historical trends. Other sources of bias include insufficient data, correlation without causation and a lack of diversity in the dataset. The result is that the algorithm reproduces existing biases, and a vicious circle is created.
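To make this mechanism concrete, here is a minimal, purely illustrative Python sketch. The data, the neighbourhood feature and the arrest rates are all invented for the example, and the scikit-learn classifier stands in for whatever model a real system might use; the point is only that a model trained on skewed records reproduces the skew.

```python
# Illustrative sketch only: synthetic data, invented numbers.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Feature: neighbourhood indicator (1 = historically over-policed area).
# Label: "arrested" in past records. It reflects policing intensity,
# not the true underlying crime rate, which we assume is equal everywhere.
neighbourhood = rng.integers(0, 2, size=1000).reshape(-1, 1)
arrest_rate = np.where(neighbourhood.ravel() == 1, 0.30, 0.10)
arrested = (rng.random(1000) < arrest_rate).astype(int)

model = LogisticRegression().fit(neighbourhood, arrested)

# The model assigns roughly three times the "risk" purely on the basis of
# the neighbourhood flag, reproducing the skew in the records it learned from.
print(model.predict_proba([[0], [1]])[:, 1])
```

If the model's higher scores then direct more patrols to the flagged neighbourhood, the resulting arrests feed back into the next round of training data, which is the vicious circle described above.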
It is worth remembering that algorithms are designed to differentiate between people, images and documents. Bias can therefore lead them to make unfair decisions that reinforce systemic discrimination. For example, a predictive policing algorithm used to forecast future crimes may disproportionately target poor persons. Similarly, an algorithm used to make a hiring decision may favour an upper-caste Hindu man over an equally qualified woman.
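The hiring example can be sketched in the same purely illustrative way. Here the protected attribute is deliberately excluded from the model, yet a correlated proxy (a career gap, invented for this example) carries the bias through; again, the synthetic data and the scikit-learn model are assumptions made only for illustration.

```python
# Illustrative sketch only: synthetic data, invented correlations.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000

gender = rng.integers(0, 2, size=n)            # 0 = man, 1 = woman (synthetic)
career_gap = (rng.random(n) < np.where(gender == 1, 0.5, 0.1)).astype(int)
qualified = rng.integers(0, 2, size=n)

# Historical hiring labels penalise career gaps, which correlate with gender.
hired = ((qualified == 1) & (career_gap == 0)).astype(int)

# Train without the gender column: the bias survives via the proxy feature.
X = np.column_stack([qualified, career_gap])
model = LogisticRegression().fit(X, hired)

equally_qualified = np.array([[1, 0],   # qualified man, no career gap
                              [1, 1]])  # equally qualified woman with a gap
print(model.predict_proba(equally_qualified)[:, 1])
```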
The extant law in India is glaringly inadequate. Our framework of constitutional and administrative law is not geared towards assessing decisions made by non-human actors. Further, India has not yet passed a data protection law. The draft Personal Data Protection Bill, 2018, proposed by the Srikrishna Committee, provides rights to confirmation and access, but not the right to receive explanations about algorithmic decisions. The existing SPDI Rules issued under the IT Act, 2000 do not address algorithmic bias either.