
Fairness of Algorithms: A Concern for Artificial Intelligence

You feed a neural network a huge amount of data, and it finds complex relationships and patterns in that data to give you a final solution. We accept the solution suggested by Artificial Intelligence (AI), but we don't understand how all the links in between worked. This explainability problem of AI is becoming a concern nowadays. The EU GDPR introduces a right to explanation for individuals when such automated decision-making takes place. There is also the possibility of bias in algorithms. This black-box problem is a hurdle we need to overcome before we integrate AI completely into our systems. If explainability is left unresolved, decisions taken by AI will carry legal implications.

When an organization implements an AI system, it is important to keep the system in check (making sure the AI is working fairly, the outcome is not biased, and the system does not deviate from the norm) so that things run in a proper and fair way. Some technologies are addressing this bias problem. IBM Watson OpenScale gives an accurate view of an AI system, monitors its performance, and fine-tunes it. Its explainability feature provides a detailed answer to why a certain decision was taken if a customer or regulator asks, and its bias feature automatically detects and mitigates bias; a simple illustration of one such check is sketched below. There have been cases where algorithmic fairness came into question and issues were raised about decisions taken by AI.
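
As a rough illustration of what automated bias detection can involve (this is not OpenScale's actual API; the column names, the "gender" attribute, and the 0.8 threshold are assumptions made for the sketch), one common check is the disparate impact ratio: the rate of favourable outcomes for one group divided by the rate for everyone else.

```python
import pandas as pd

def disparate_impact(df: pd.DataFrame, group_col: str, outcome_col: str,
                     protected_value, favourable=1) -> float:
    """Ratio of favourable-outcome rates: protected group vs. the rest.

    Values well below 1.0 suggest the protected group receives
    favourable outcomes less often than everyone else.
    """
    protected = df[df[group_col] == protected_value]
    rest = df[df[group_col] != protected_value]
    rate_protected = (protected[outcome_col] == favourable).mean()
    rate_rest = (rest[outcome_col] == favourable).mean()
    return rate_protected / rate_rest

# Hypothetical screening decisions (1 = shortlisted, 0 = rejected).
decisions = pd.DataFrame({
    "gender":  ["F", "F", "F", "F", "M", "M", "M", "M"],
    "outcome": [ 0,   1,   0,   0,   1,   1,   0,   1 ],
})

ratio = disparate_impact(decisions, "gender", "outcome", protected_value="F")
print(f"Disparate impact ratio: {ratio:.2f}")
# A common rule of thumb flags ratios below 0.8 for human review.
if ratio < 0.8:
    print("Potential bias detected: favourable-outcome rates differ noticeably.")
```

A monitoring tool would run checks of this kind continuously against the model's live decisions rather than once against a small table, but the underlying comparison is the same.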

AI recruiting tool at Amazon: To make the recruitment process faster by speeding up resume screening, Amazon used an algorithm for quicker decision-making. The idea was to screen a high volume of resumes through AI, which would pick the top five candidates based on a certain algorithm. However, it was later found that the system was not gender neutral: it had taught itself, based on past data, that male candidates were preferable. The system was then reprogrammed, but there is no guarantee that the AI will not interpret other relationships in the data in some other way and create a different kind of bias in its decision-making. The revised system was still picking up gender-influenced words such as “executed” and “captured”.

COMPAS: This is an algorithm that estimates the probability of a criminal re-offending, based on a large amount of past data about individuals. However, it was later found to be racially biased. Given that the algorithm can be proprietary, it is difficult to judge its fairness.

PredPol: This is an algorithm used to predict when and where a crime will take place. It was created by UCLA scientists working with the Los Angeles Police Department. However, it is argued that the program is racially biased. The program repeatedly and wrongly flagged certain minority neighborhoods as crime scenes, even though the true crime rate in those areas did not match PredPol's predictions; the algorithm was unfairly targeting those neighborhoods. Because the algorithm predicts crime based on the cases reported by police, it creates a loop, a sort of spiral, in which past data creates and perpetuates the bias. The sketch after this paragraph illustrates that feedback loop.
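
To make the "spiral" concrete, here is a minimal, entirely hypothetical simulation (the neighborhood names, crime rates, detection probabilities, and patrol rule are all assumptions, not PredPol's actual model): if patrols are sent wherever the most crime has been recorded, and crime is more likely to be recorded where patrols already are, the initially over-policed area keeps accumulating reports even when the underlying crime rates are identical.

```python
import random

random.seed(0)

# Two hypothetical neighborhoods with the SAME underlying crime rate.
true_rate = {"A": 0.3, "B": 0.3}
# A crime only enters the records if police are around to observe/report it.
detection_if_patrolled = 0.9
detection_if_not = 0.2

recorded = {"A": 5, "B": 1}  # historical records: A starts over-represented

for week in range(52):
    # "Predictive" rule: patrol the neighborhood with the most recorded crime.
    patrolled = max(recorded, key=recorded.get)
    for hood in true_rate:
        # Same true crime volume in both neighborhoods each week.
        crimes = sum(random.random() < true_rate[hood] for _ in range(100))
        detect = detection_if_patrolled if hood == patrolled else detection_if_not
        recorded[hood] += sum(random.random() < detect for _ in range(crimes))

print(recorded)  # A's record grows far faster despite equal true crime rates
```

The point of the toy model is only that the feedback runs through the data: the prediction changes where crime gets recorded, and the records then justify the next prediction.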

 

Machines can't think, so the bias problem is not created by the machine itself but by humans. These biases are then perpetuated by the machine based on past data. For example, in the case of the Amazon recruitment process, the algorithm studied past hiring data and rated male applicants as more preferable than female applicants. Any data that carries a historical bias can be perpetuated by the algorithm; the same is true for loan application screening or crime area prediction. Efforts are underway to counter the bias problem by working more closely on the training data, as sketched below. AI can be a very good companion for humans, provided we train it with diverse and fair data that does not carry historical biases.
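
One concrete way to "work more closely on the training data" is to check, before training, whether favourable historical outcomes are spread evenly across groups, and to reweight examples when they are not. The sketch below is only an illustration: the tiny dataset and column names are invented, and the reweighing formula (weighting each group/label cell so that group and label look statistically independent) is written out by hand rather than taken from any specific fairness library.

```python
import pandas as pd

# Hypothetical historical hiring data: 1 = hired, 0 = not hired.
history = pd.DataFrame({
    "gender": ["M", "M", "M", "M", "F", "F", "F", "F"],
    "hired":  [ 1,   1,   1,   0,   1,   0,   0,   0 ],
})

# Marginal and joint frequencies observed in the historical data.
p_group = history["gender"].value_counts(normalize=True)
p_label = history["hired"].value_counts(normalize=True)
p_joint = history.groupby(["gender", "hired"]).size() / len(history)

# Reweighing: each row gets the weight that would make group and label
# independent, so under-represented combinations (e.g. hired women) count more.
history["weight"] = history.apply(
    lambda row: (p_group[row["gender"]] * p_label[row["hired"]])
    / p_joint[(row["gender"], row["hired"])],
    axis=1,
)

print(history)
# These weights can typically be passed to a classifier (e.g. as sample
# weights) so the model stops treating "male" as a proxy for "hired".
```

Checks and corrections like this do not remove the need for human judgment, but they make the historical bias visible before it is baked into a model.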

