Bias in AI: Don’t make algorithms the scapegoat for humanity’s ignorance and discrimination

Isaac Tham
Mar 18, 2021

Coded Bias is a 2020 documentary directed by Shalini Kantayya exploring how today's machine learning algorithms perpetuate bias, and the undesirable consequences this brings. The film's thesis is that artificial intelligence fosters an illusion of objectivity in our society. This objectivity is purportedly illusory because of human-generated data, algorithmic bias, technochauvinism, algorithmic determinism, and information monopolies. Blindly adopting AI into our most important decision-making processes can unfairly reinforce inequalities, especially along lines of gender, race and class. However, I find the film problematic in how it casts algorithms as the scapegoat for humans' bad intentions and ignorance. This is an unfair characterization of technological breakthroughs that promise to bring much good into society, and I fear it will provoke a reflexive revulsion toward even hugely beneficial technologies, rather than an examination of the flawed social processes that allow bad technology to propagate.

In this essay, I attempt to provide a balanced view of this debate, with the sincere hope of sparking a reasoned discussion among people in both technology and the humanities. As a computer science enthusiast also keenly interested in societal issues, I have found it fascinating to be led into this debate from both sides: in my deep learning class, Professors Konrad Kording and Lyle Ungar share many articles about how AI can be biased, while my Sociology of Pop Culture class with Professor David Grazian speaks about how digital technology can exacerbate inequalities.

(Note: To be clear, Coded Bias delves into two main issues concerning technology: algorithmic bias, and privacy concerns regarding surveillance. While I have thoughts on the latter, this essay will discuss the former. There are other issues with AI, such as automating away millions of jobs and exacerbating economic inequality, that I believe are its true greatest problems, but these are out of the scope of this discussion.)

The documentary validly identifies the biases present in the data that machine learning algorithms are trained on, biases which in turn produce algorithmic bias. Delving deeper, I can identify two distinct sources: bias in labelling, and bias due to a lack of diversity in training data.

Let's look at bias in labelling first. Training data consists of predictor variables and labels, with labels ranging from the number of police arrests in a neighborhood, to whether a loan was approved, to what object a picture portrays. The labelling is done by humans, and it forms the gold standard that the algorithm uses in its attempt to learn the underlying decision-making process. Hence, one can naturally see how the labelled training data embeds all the associated biases and inequalities of human society. An African-American-dominated neighborhood may have more arrests because the police were discriminatory and patrolled these areas more. Fewer women may have been hired for a job in the past due to a sexist HR manager. Certainly, the act of labelling or categorizing is an act of power, power that has, at worst, been used to reinforce discrimination and oppression of the powerless and minorities, and at best, neglected their alternative perspectives.

Bias can also arise from a lack of diversity in training data. This is the main source of the problems Joy Buolamwini was most frustrated with in the film. The facial datasets that big-tech companies use to train facial recognition models are dominated by white male faces, with far fewer faces of minorities such as African-Americans. Since a model's accuracy on a particular group depends on how much training data is available from that group, this mechanically leads to the models learning minorities' faces less well, and hence to lower accuracy. This was starkly portrayed in the film when the software failed to recognize Joy's darker-toned face but recognized her wearing a white mask. Another mechanism is bias amplification, where the disparity in a model's predictions between different classes is amplified compared to the ground truth.
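
To see this mechanism concretely, here is a minimal sketch (using synthetic data and made-up group proportions, not any real dataset) of how a group that is underrepresented in training ends up with lower test accuracy:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Each group's features follow a different distribution, so the
    # model must learn group-specific structure to classify both well.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 5))
    y = (X.sum(axis=1) + rng.normal(size=n) > 5 * shift).astype(int)
    return X, y

# The majority group dominates the training set (90% vs 10%).
X_maj, y_maj = make_group(9000, shift=0.0)
X_min, y_min = make_group(1000, shift=2.0)
model = LogisticRegression().fit(np.vstack([X_maj, X_min]),
                                 np.concatenate([y_maj, y_min]))

# Evaluate on fresh, equal-sized test sets for each group: the
# underrepresented group typically scores markedly lower.
X_t, y_t = make_group(2000, shift=0.0)
print("majority accuracy:", model.score(X_t, y_t))
X_t, y_t = make_group(2000, shift=2.0)
print("minority accuracy:", model.score(X_t, y_t))
```

The single model settles on a decision boundary that fits the majority group's distribution, at the minority group's expense, which is exactly the mechanical effect described above.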

Such imbalances in training data could be due to power asymmetries — perhaps more white males have access to the internet to upload pictures of themselves, or a white male-dominated research group chose to source pictures from websites more frequently used by white males. But many times, the imbalances are simply due to numbers: there are fewer people from minority ethnic groups than from the majority.

From this, we can see that humans infuse bias into algorithms, rather than algorithms being biased themselves. Nearly all problems with bias in AI stem from bias in the human-generated training data and ignorance of its effects. Algorithms can only be as good as the training data they are given — they dispassionately take inputs and return outputs, so they cannot be blamed if the output contains bias. The only area where I can pin the blame on algorithms is bias amplification, which exacerbates disparities — otherwise, algorithms purely reflect the biases that humans (knowingly or not) feed into them.

Cathy O'Neil, the author of Weapons of Math Destruction, says that 'artificial intelligence is just math'. That is true — the neural networks that underlie facial recognition and the random forests that make loan application decisions are nothing but cleverly-designed matrix algebra and functions that extract underlying patterns from data to minimize prediction error (or what the deep learning community calls a loss function). Indeed, it would be absurd to label math as 'biased'. AI is just a tool for humans to make decisions, and the problems with AI applications reflect problems with the human design process of the algorithm rather than with the technology itself.

Bias in algorithmic outputs can be solved when their creators make informed choices about the design of the algorithm. Often, this bias exists because a researcher unknowingly includes sensitive attributes such as race or gender in a prediction model; the model thus learns to use race and gender to judge a person's creditworthiness or chance of recidivism. This can be addressed with careful consideration of the variables an algorithm makes decisions with — creators can remove sensitive attributes (race, gender, etc.) as well as proxy attributes (such as zip code) that are correlated with them.
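
As a minimal sketch of that idea (the dataset and column names here are hypothetical), one would drop both the sensitive attributes and their known proxies before training:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Hypothetical loan-application dataset; file and column names are made up.
df = pd.read_csv("loan_applications.csv")

sensitive = ["race", "gender"]
proxies = ["zipcode"]  # correlated with race via residential segregation

# Train only on the remaining, non-sensitive predictors.
X = df.drop(columns=sensitive + proxies + ["approved"])
y = df["approved"]

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
```

Of course, this only removes the proxies the designer knows about; other features can still correlate with the sensitive attributes, which is one motivation for the fairness objectives discussed next.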

Bias can also be reduced by choosing better objectives for the algorithm to optimize. In the past few years, as bias has become a widely-researched issue in machine learning, numerous metrics of algorithmic fairness have been developed, such as independence, separation and sufficiency. Adding these as penalty terms to the objective function pushes the algorithm to balance predictive accuracy against these desirable characteristics. Other techniques have been developed to counteract the bias amplification mechanisms that cause image classifiers to be less accurate on minorities. Sure enough, in the documentary Joy recounts how she contacted IBM about the accuracy bias in their facial recognition technology, which led the company to tweak the algorithm and release a new version with far more equal accuracy across genders and skin tones. This exemplifies the fact that the priorities and choices of an algorithm's human creators are the key determinant of how biased its output is.
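
As a rough sketch of the penalty-term idea, here is the independence (demographic parity) criterion added to a standard classification loss in PyTorch; the model, the group indicator and the weight `lam` are placeholders:

```python
import torch
import torch.nn.functional as F

def fairness_penalized_loss(model, X, y, group, lam=1.0):
    # Standard binary cross-entropy on the model's predicted scores.
    scores = torch.sigmoid(model(X)).squeeze(-1)
    bce = F.binary_cross_entropy(scores, y.float())

    # Independence penalty: squared gap between the mean predicted
    # score of the two groups. It is zero when predictions are, on
    # average, independent of group membership (demographic parity).
    gap = scores[group == 1].mean() - scores[group == 0].mean()
    return bce + lam * gap ** 2
```

Tuning `lam` trades raw accuracy against the fairness criterion, which makes the designer's priorities an explicit, inspectable choice rather than an accident of the data.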

By coining and spreading terms such as 'algorithmic bias' and 'algorithmic determinism', the documentary paints artificial intelligence as an adversarial, evil perpetuator of bias. This feeds the popular narrative in pop culture of portraying AI as sentient beings that turn against humans, as in The Terminator. The many scenes in the film with a blackened screen and a monotonous AI voice speaking morally questionable content all cast AI as sinister and untoward, something to be feared and rejected. The film then goes on to perpetuate more commonly-held myths about artificial intelligence.

One key premise in the film is that artificial intelligence is a 'black box' that 'even the creators do not understand'. As someone who has delved deeply into machine learning and deep learning in college, I know that this is simply not true, and a disservice to the astounding work that thousands of machine learning researchers have put in to bring the technology to where it is today. Researchers have developed interpretability techniques, such as feature importance and partial dependence, to find out which factors drive a prediction and how an incremental change in one variable will affect it. For the convolutional neural networks that underlie today's state-of-the-art facial recognition technology, techniques exist that let users visualize which part of a picture was most responsible for the model's prediction, and understand the intermediate features learnt by the network by visualizing the individual convolutional filters. (Check out this link for an in-depth explanation.) With these tools, we can actually understand how artificial intelligence arrives at decisions — objectively, methodically and truthfully.
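
For instance, scikit-learn ships both techniques in its `inspection` module; here is a minimal sketch on a stand-in dataset and model:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance, partial_dependence

X, y = load_breast_cancer(return_X_y=True)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Feature importance: shuffle each feature in turn and measure how much
# the model's accuracy drops, revealing which factors drive predictions.
imp = permutation_importance(model, X, y, n_repeats=10, random_state=0)
print("most important feature index:", imp.importances_mean.argmax())

# Partial dependence: how an incremental change in one feature shifts
# the model's average prediction.
pdp = partial_dependence(model, X, features=[0])
print(pdp["average"])
```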

In fact, is this not how we want the key decisions affecting us to be made? Are human decisions, on the other hand, as objective and well understood as we think they are? The term 'algorithmic determinism' fallaciously implies that human-made decisions never unfairly and arbitrarily determine the course of someone's life. AI may be flawed, but humans can be even worse. When O'Neil pushes back against 'algorithms that affect (our) future and opportunities without offering any explanation', I would respond — what would you prefer? To be evaluated by subjective people whose decision-making process you can't peer into, who can claim they are rejecting you for your grades while actually doing so out of discriminatory attitudes, and whose decisions are volatile and subject to their emotions or even how hungry they are? (Research has found, for example, that judges make harsher decisions when they are hungrier.) I'm sure many have experienced the frustration of knowing you weren't selected for a team because the coach didn't like you, while he pointed to some made-up 'inadequacies' of yours to justify his decision. Algorithms, on the other hand, can't lie; they are consistent, and they can be deconstructed to inspect their decision process.

Another claim thrown around by opponents of AI is that AI is 'inaccurate', which is also untrue. Machine learning algorithms have outperformed humans on image recognition benchmarks (such as ImageNet and CIFAR-10) since 2015, with the gap widening by the year. These criticisms, commonly shaped by memories of media reports of obvious mistakes made by machine learning algorithms, may have been true of AI in its earlier, worse form, but with AI technology and research improving over time, they are very likely outdated.

Convolutional neural networks have surpassed human accuracy in image classification since as early as 2015. (Image credit: outsystems.com)

I fear that spreading exaggerated misinformation about the inaccuracy or uninterpretability of AI can entrench negative mindsets in the populace that are difficult to dislodge, permanently eroding the public's trust in artificial intelligence even after rapid improvements eventually resolve most of these concerns. This is especially so because advances in AI are technical and difficult for many to grasp, making it hard for people to verify the truth for themselves and leaving them easily influenced by whoever presses their agenda the loudest and stirs up their deepest fears, even when those fears may prove unfounded. It is frustrating to watch the people in the film instinctively recoil against facial recognition technology installed in their apartments, citing reasons like its 'inaccuracy' and 'lack of transparency'. Faced with such fear and uncertainty, knee-jerk reactions such as cities banning all facial recognition technologies seem like victories.

What I've said thus far is not incompatible with many people's recommendations for more regulation and oversight of AI, such as through the Algorithmic Accountability Act and the Commercial Facial Recognition Privacy Act. The algorithms that significantly affect our lives should be open and transparent for everyone to access. That would ensure that companies work hard to keep their algorithms fair and unbiased, and when they fall short, people can suggest improvements and canvass for changes. In fact, this is already happening in the academic sphere — machine learning research is one of the most collaborative, transparent and open-source disciplines, with researchers posting the code of their state-of-the-art algorithms online, allowing the community to iteratively improve on them. This is the main reason the pace of AI research has been so rapid. The problem is that commercial companies keep their algorithms shrouded in secrecy for intellectual-property reasons, and these profit-making firms increasingly affect every aspect of our lives — such as Amazon's facial recognition software being trialed with law enforcement. Hence, machine learning is not inherently opaque and inaccessible; rather, capitalistic incentives are the root cause of the perceived lack of transparency around the technologies that most affect us.

The unconsidered pushback against AI threatens to stymie all the tremendous progress that AI has brought us. AI can pick up trends and patterns that are imperceptible to people but useful for prediction, improving our decision-making capabilities. In healthcare, image recognition of tumors, along with algorithms combing through patients' diagnostic test results, can offer doctors finer-grained and more accurate diagnoses of patients' conditions, as Kai-Fu Lee described in his book AI Superpowers when he turned to machine learning research to understand his own cancer prognosis. As mentioned earlier, AI is consistent, truthful, repeatable and controllable. Last but not least, image-recognition software can be scaled, making production processes that require human supervision much more efficient — in Japan, pictures of tuna cross-sections are used to assess tuna quality, and in Africa, deep learning is deployed to help farmers detect and fight crop diseases that threaten the world's food security.

At the same time, I definitely do not propose that we surrender all decisions to AI. While technological advancements can improve AI decision-making in many fields that require objectivity, our society is not always objective, and objectivity is sometimes not the aim. AI has a weakness it may never overcome: it cannot replicate humans in qualitative, subjective tasks. Some characteristics of a person, such as aesthetic beauty, compassion, love and passion, cannot be captured quantitatively, and we should never let AI attempt to evaluate these traits. Hence, many decisions that hinge on a subjective assessment of character, such as parole or college admission essays, should stay within the purview of humans.

In fact, the best vision of the future is one where AI and humans work together, each serving different yet complementary roles in decision-making tasks. Machine learning takes care of the quantifiable aspects of a decision and provides recommendations, which a human decision-maker combines with other subjective factors to make the final call. Whenever somebody feels unjustifiably rejected by an algorithm-driven decision, he can appeal and have a human evaluator look into the decision-making factors and offer a subjective view. Hence, I agree with the rejection of technochauvinism, the belief that technological solutions are always the best, but I do not agree with the rejection of technology.

In conclusion, the increasing focus on bias in algorithms is something to be celebrated, for it is part of a beneficial movement which calls into question the agents of power in today’s increasingly technological society and how this power is wielded. But this focus should remain centered on changing the mindsets of discriminatory or ignorant organizations and institutions so that they design technologies which are fairer and more inclusive, and not on changing the mindsets of the population to reject all artificial intelligence.
