Elin Hauge

Bias - a question of human stupidity, not the diversity of the tech team

Updated: Jul 18, 2022

While businesses and societies embrace artificial intelligence (AI) as a tool to achieve a better, more prosperous, and more sustainable future, the problem of bias tags along like the annoying younger sibling who destroys the party but cannot be dismissed. Oh, what bliss it would be if we could simply apply the intelligent algorithms, reap the benefits, and leave it to the data science team to solve the bias problem.

Credit: PerG.DArt

In 2018, it became public that Amazon had built an AI model to help them recruit the best candidates, based on 10 years of historical data on talent development and employee performance within Amazon. A perfectly reasonable intention, shared by many a manager out there. What they ended up with was an AI model that systematically discriminated against female candidates, to such an extent that they had to scrap the entire initiative.


A while back, I heard a Norwegian (female) bureaucrat on the radio talking about gender equality in STEM jobs. She argued that the failure of Amazon's recruitment AI was a great example of why the world needs more women in data science specifically, and in tech in general. That sounds plausible, and who's against a better gender balance in any occupation, let alone science and technology?

The thing is, her reasoning was wrong, and she's not alone in misunderstanding how artificial intelligence works and how the problem of bias arises. Here's what actually happened.

THE AMAZON RECRUITMENT AI SCANDAL

Amazon actually started working on an intelligent automation tool for recruitment back in 2014. Keep in mind that Amazon has extensive experience in machine learning, given their e-commerce and warehousing dominance. The ambition was to create a recruitment tool that would process a large volume of resumes and spit out the top candidates to be hired. I am pretty sure that many an HR manager would covet such a tool.

A while into the project, they realized that their tool was highly gender-biased. Investigating the training data, they found that the historical data on which the tool was trained contained mostly male applicants. To solve this, they tried to remove all words that were not gender-neutral. But artificial intelligence is smarter than that; the algorithms still found patterns in the data with implicit gender references, e.g. all-women's colleges. They did not manage to solve the issue consistently, and closed down the initiative.

About a year later, they tried again, with another team of data scientists and developers in a different country. This time they came prepared for the gender bias problem. Or so they thought. Even without obvious gender information in the training data, the tool learned from the data to favour candidates describing themselves in masculine terms. In addition, the efforts to mask gender had caused new problems, resulting in the tool recommending unqualified candidates for all kinds of jobs. Yet again, the initiative was closed down.
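To make the proxy mechanism concrete, here is a minimal sketch in Python. This is not Amazon's system; the data, feature names, and numbers are entirely synthetic, and the point is only to show how a model trained on biased hiring labels can learn to penalise a "gender-neutral" feature that happens to correlate with gender:

```python
# Synthetic, illustrative sketch of proxy-feature bias.
# Nothing here is real data or Amazon's actual model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Hidden attribute we try to keep out of the model: 0 = male, 1 = female.
gender = rng.integers(0, 2, size=n)

# A proxy feature that survives "gender-neutral" cleaning, e.g. having
# attended a women-only college. By construction it occurs only for women.
womens_college = (gender == 1) & (rng.random(n) < 0.3)

# A genuinely job-relevant skill score, independent of gender.
skill = rng.normal(0, 1, size=n)

# Historical hiring labels: driven by skill, but with a bonus for men,
# mimicking years of skewed human decisions baked into the training data.
hired = (skill + 0.8 * (gender == 0) + rng.normal(0, 1, n) > 1.0).astype(int)

# Train only on the "gender-neutral" features: skill and the proxy.
X = np.column_stack([skill, womens_college.astype(float)])
model = LogisticRegression().fit(X, hired)

# The proxy feature gets a negative weight: the model has effectively
# learned to penalise a marker of being female, without ever seeing gender.
print("weights [skill, womens_college]:", model.coef_[0])
```

The point of the sketch: removing the gender column does not remove gender from the data. The label itself carries the bias, and any correlated feature becomes a channel for it.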

AN IMPORTANT CASE FOR LEARNING

It makes no sense to ridicule Amazon for either their efforts or the outcome. In fact, they worked hard to solve a problem that is way more common than you may think, namely the consequences of years of human behaviour and human decision-making, solidly ingrained in the company's data. When they did not manage to solve the bias to a sufficient extent, they did the only responsible thing - they scrapped it.


So let me return to the claim from the Norwegian (female) bureaucrat:

  1. No, more women in the tech team would not have changed the outcome. Whatever the level of diversity in their tech team, they still closed down the initiative because of bias issues - twice.

  2. No, more women in the tech team would not have made them realize the issues sooner. The process of developing and training machine learning models does not depend on the gender or colour of the people doing the job, only on their skills. We can safely assume that Amazon has highly skilled tech and data science resources.

  3. No, more women in the tech team would not have magically changed the training data. The training data were what they were, namely 10 years of documentation of analogue human decision-making (aka "good old" hiring decisions).

The Amazon case is, however, a brilliant example of how human prejudice, human preferences, and human everyday habits become ingrained in the enormous amounts of data this world produces every day. I am talking about you and me, our everyday behaviour and everyday decisions. When that behaviour is aggregated and fed to the algorithms, it's like putting an amplifier on human stupidity.


"The Amazon case is however a brilliant example of how human prejudice, human preferences, and human everyday habits become ingrained in the enormous amounts of data this world produces every day"

In the Amazon case, we can assume that there were at least two human mechanisms influencing the data: 1) the tech industry was heavily male-dominated in the relevant period, and hence female candidates were by definition significantly underrepresented, and 2) leaders tend to prefer hiring candidates who are similar to themselves, and hence there has probably been an unconscious human bias towards recruiting "more of the same". By stating this, I also reveal my personal bias: I have worked for many years in the Norwegian tech industry, where the phrase "we didn't find any relevant female candidates" was omnipresent.

To really solve the problem of unwanted bias*, we need to start with our human selves. We need to look in the mirror and question our own human biases. If we keep asking for what we have always had, we will get what we always got. Don't blame the machine.

THE PROBLEM OF HUMAN BIAS IS CLOSER THAN YOU THINK

If the Amazon example feels too far away, I'll give you a local Oslo-based example. A couple of months ago, I came across a job posting from one of the larger headhunting firms in Oslo. The role was investment director for a Greentech venture capital fund. The first paragraph explicitly stated that candidates should either have previous venture capital experience or a background from "tier one" consultancies like McKinsey or BCG. That sounds reasonable, right? Well, purely statistically, that means a preference for young white men with a background from McKinsey, simply because the venture capital industry also has a historical preference for white men with a background from…you guessed it, "tier one" consultancies like McKinsey. Diversity thrown out the window even before the first candidates file their applications. No machines involved.

Do you need more examples to be convinced of the problem of human stupidity in machine bias? Here's one more: a female top executive with an impressive track record in a Nordic listed company was recently profiled in the media as a rising star. The (female) board chair told her to stop sticking her neck out, and said that the only reason she got media attention was that she is a woman with a skin tone from a different continent. No machines involved. **

I strongly support the need for gender balance, and diversity in general, in STEM. However, the failure of the Amazon case was not due to gender imbalance in the tech team. It's about highly analogue human behaviour, cemented into data through years of "business as usual", and then fed as training data for the algorithms to learn from. Just like children, algorithms are not "born" racist, bigoted, or sexist; they learn by modelling the behaviour of their surroundings, aka data sets.


"The future is not human less, it is human more."

WE CAN DO BETTER

When developing artificial intelligence, we humans find ourselves standing in front of a huge mirror. The data we give to the algorithms for training represent days, months, years, perhaps even decades of human behaviour in all its inglorious reality. To reduce bias in machines, we need to become better versions of humans. The future is not human less, it is human more.

For more information or for booking of keynotes, please contact me directly or via Bob Strange "The Closer".




PS: The upcoming AI Act from the European Commission will force the recruitment industry to fundamentally rethink its use of AI.

(*) Any data set will always contain bias. Bias is a preference or, in statistical terms, a systematic deviation between the mean of the available data set and the mean of the underlying population. The question is what kind of, and how much, bias we can accept in a given context.
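As a toy illustration of that definition (made-up numbers, nothing from the Amazon case): if one group is heavily under-sampled in the available data, the sample mean drifts away from the population mean, and that drift is the bias.

```python
# Toy illustration of bias as the systematic deviation between the
# mean of the available (sampled) data and the population mean.
# All numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)

# Full population: a 50/50 split between two groups (coded 0 and 1).
population = np.concatenate([np.zeros(5_000), np.ones(5_000)])

# Available data set: group 1 is heavily under-sampled, much as female
# applicants were underrepresented in the Amazon training data.
keep_prob = np.where(population == 1, 0.1, 0.9)
sample = population[rng.random(population.size) < keep_prob]

print(f"population mean: {population.mean():.2f}")
print(f"sample mean:     {sample.mean():.2f}")
print(f"bias:            {sample.mean() - population.mean():+.2f}")
```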

(**) Details of the story are not provided, in order to protect the individual.

