Last weekend a number of Twitter users ran an experiment with images containing the faces of white and black people. When these images were uploaded to Twitter, the platform's preview algorithm consistently zoomed in on the white faces. The experiments controlled for size, background colour and any other variable that could affect the image cropping algorithm.
The Beginning
Colin Madland, a university manager based in Vancouver, noticed that his colleague's head vanished whenever the colleague used a virtual background on Zoom. He took to Twitter in an attempt to troubleshoot the problem, only to notice that the Twitter image preview always showed his own face rather than his colleague's, even after he switched the order of the images.
This led many people on Twitter to run experiments of their own to see whether Twitter's image cropping algorithm was racially biased. The test images featured Barack Obama, Senator Mitch McConnell and even the fictional characters Carl and Lenny from The Simpsons. The results were consistent: darker people, images and characters were cropped out of the preview, and lighter people, images and characters were kept in, irrespective of their positioning in the image. A sketch of how such a test image can be constructed is shown below.
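The setup behind these experiments is easy to reproduce. Here is a minimal sketch in Python using Pillow; the solid-colour squares standing in for portraits are my placeholders (the real tests used photographs), since Twitter has not published an offline way to query its cropping model. The idea is simply to build one tall composite per ordering, so position can be ruled out as the deciding factor.

```python
# A minimal sketch of how the test images were constructed: two same-sized
# "portraits" at either end of a tall white canvas, in both orderings.
# Placeholder colour squares are used so the script runs with no external
# files; in the real experiments these were photographs of two people.
from PIL import Image

FACE_SIZE = (200, 200)      # both faces identical in size
CANVAS_SIZE = (200, 1200)   # a tall canvas forces the preview to crop

def make_test_image(top_face: Image.Image, bottom_face: Image.Image) -> Image.Image:
    """Place one face at the top and one at the bottom of a white canvas."""
    canvas = Image.new("RGB", CANVAS_SIZE, "white")
    canvas.paste(top_face, (0, 0))
    canvas.paste(bottom_face, (0, CANVAS_SIZE[1] - FACE_SIZE[1]))
    return canvas

# Placeholder "portraits" -- photographs in the real tests.
face_a = Image.new("RGB", FACE_SIZE, (240, 200, 180))
face_b = Image.new("RGB", FACE_SIZE, (90, 60, 45))

# Build both orderings; upload each and compare which face the preview keeps.
make_test_image(face_a, face_b).save("face_a_on_top.png")
make_test_image(face_b, face_a).save("face_b_on_top.png")
```

If both uploads crop to the same face regardless of which one is on top, the image's layout cannot be what is driving the crop.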
More than just an experiment
This experiment is just the latest chapter in the racially biased algorithms saga. What is especially concerning is the fact that algorithms are increasingly being used to make decisions that affect our lives. Here are some examples of these decisions:
- Education – This year an algorithm was used to determine the grades of A-Level students in England. Around 40% of students received grades lower than their teachers had predicted, the majority of them from underprivileged backgrounds.
- Immigration – The Home Office discontinued an algorithm it used to help decide visa applications after allegations that it contained “entrenched racism”. Applicants from red-flagged countries were almost certain to have their applications rejected.
- Policing – Officers in the UK raised concerns about the use of biased AI tools after a study found that such software could amplify existing prejudices.
Technology isn’t biased, the inputs are
One thing that I want to drive home is that technology itself isn’t biased. Your computer won’t open an application without some sort of input. An automated dispenser won’t dispense soap unless it’s taught how to recognise a hand. Facial recognition technology won’t deem a black man a threat unless some sort of bias is embedded in the data it was trained on. A toy sketch of how this happens is shown below.
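To make that concrete, here is a small toy illustration of my own (it has nothing to do with Twitter's actual model, and all the data is synthetic): a classifier trained on inputs that under-represent one group ends up performing worse for that group, even though the learning algorithm itself treats every example identically.

```python
# A toy illustration of "biased inputs, biased outputs": a classifier trained
# almost entirely on group A learns a decision rule that works for A but
# fails for the under-represented group B.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def sample(n, group):
    """Two synthetic groups whose labels depend on different features."""
    y = rng.integers(0, 2, n)
    x = rng.normal(0, 1, (n, 2))
    if group == "A":
        x[:, 0] += np.where(y == 1, 1.5, -1.5)  # group A: feature 0 carries the label
    else:
        x[:, 1] += np.where(y == 1, 1.5, -1.5)  # group B: feature 1 carries the label
    return x, y

# Training data is 95% group A, 5% group B -- the "biased input".
xa, ya = sample(950, "A")
xb, yb = sample(50, "B")
model = LogisticRegression().fit(np.vstack([xa, xb]), np.concatenate([ya, yb]))

# Evaluate on fresh, equally sized samples from each group.
xa_test, ya_test = sample(1000, "A")
xb_test, yb_test = sample(1000, "B")
print(f"accuracy on group A: {model.score(xa_test, ya_test):.2f}")
print(f"accuracy on group B: {model.score(xb_test, yb_test):.2f}")  # noticeably lower
```

Nothing in the learning algorithm “knows” about groups; the disparity comes entirely from what it was shown.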
You may be asking, “How can we get rid of these biases?”
Well, the good news is that removing biases from programmes and algorithms is actually pretty easy… the bad news? It is not nearly as easy to remove biases from people, and people write the algorithms that we are becoming ever more reliant on to make decisions. Unfortunately, biased algorithms are easier to fix than biased people.
A quick glance at Twitter’s 2019 diversity and inclusion report will show you why I am not surprised by these events. If your leadership and technical workforce lack diversity, there will be biases in the work produced, whether related to gender, sex or race. Twitter’s latest cropping algorithm serves as yet another reminder of why the tech sector should be more diverse.