The A-Level Grading Fiasco Is a Scary Case of Algorithm Abuse
If someone asks you to think of evil technology or technology 'gone rogue', what springs to mind? Most often, our imaginations conjure up cyborg armies, like something out of an episode of Doctor Who. Increasingly, however, it is algorithms that face vilification, arguably more so in 2020 than ever before.
Algorithms have been subject to a lot of bad press this year. Most recently, UK-based students were the victims of an algorithm fiasco that proved the final nail in the coffin of their disrupted academic year. Like many pupils around the world, UK students were unable to attend school and ultimately could not sit their A-Level exams due to the COVID-19 pandemic. In their place, it was decided that grades would be determined by an algorithm.
However, when the fateful results day arrived, it quickly emerged that the algorithm favoured private school pupils, many of whom achieved higher grades, while significantly downgrading those from poorer, state school backgrounds. The news and social media quickly filled up with examples of completely nonsensical grades, such as straight-A students being awarded Bs and Cs for the first time in their secondary school careers.
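To see how this kind of downgrading can happen, here is a deliberately simplified sketch (not Ofqual's actual model) of a moderation rule that adjusts a student's teacher-assessed grade using the school's historical results. The `moderate` function and the grade scale are hypothetical illustrations; the point is that once a school's past performance caps an individual's grade, a strong student at a historically lower-performing school loses out regardless of their own record.

```python
# Hypothetical illustration of school-level moderation, NOT Ofqual's model.
# Grades ordered from worst to best; index position encodes rank.
GRADES = ["U", "E", "D", "C", "B", "A", "A*"]

def moderate(teacher_grade: str, school_historical_best: str) -> str:
    """Cap a teacher-assessed grade at the best grade the school has
    historically achieved (a hypothetical moderation rule)."""
    capped_rank = min(GRADES.index(teacher_grade),
                      GRADES.index(school_historical_best))
    return GRADES[capped_rank]

# A straight-A student at a school whose best past result was a C
# is pulled down to a C -- their individual record is ignored:
print(moderate("A", "C"))   # prints "C"

# The same student at a school with past A* results keeps the A:
print(moderate("A", "A*"))  # prints "A"
```

Even this toy rule shows the structural problem: the bias lives not in any one line of code but in the decision to let a cohort-level statistic override individual evidence.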
Sadly, it's not just students who have experienced the ill effects of an algorithm. The discriminatory nature of algorithms and their racial bias came into sharp focus globally amid the heightened awareness of racial discrimination in police forces. From facial recognition technologies to racial algorithmic bias in healthcare, algorithms have been responsible for perpetuating the injustice that BAME groups face.
These algorithms have been delegated enormous responsibility and power over people's lives and, for some, the outcome has been catastrophic. It therefore comes as no surprise that the general public and press are becoming increasingly wary of how much control algorithms have over our lives. Also unhelpful for algorithms' reputation is a general distrust of technology, which has perhaps been exacerbated by the publicity surrounding the Big Tech antitrust hearing.
However, at the risk of making a sweeping statement: algorithms can only be as 'evil' as their creators. Humans may write an algorithm with the best of intentions, but humans are inherently biased, and algorithms execute their code literally, faithfully reproducing whatever bias is baked into them.
While the majority of algorithms are harmless and go completely unnoticed by the general public, putting people's lives in the hands of one that isn't absolutely perfect is a risk that governments and authorities cannot afford to take. Given that a degree of bias is inevitable, heavy reliance on an algorithm in life-changing situations is just ludicrous. BAME groups, students, and anybody else for that matter should not have to pay the price for algorithmic bias, yet they have, time and time again.
Therefore, it's no wonder that algorithms have left a bitter aftertaste. Undoubtedly, it will take a while to rebuild that trust, but in the meantime, we need to work on some changes. We're just not ready to relinquish human intervention to such a degree; whether that means doing sample studies of new algorithms or not using one at all, businesses, governing bodies, and authorities must find a suitable way to make algorithms work for them and the people they serve. At the end of the day, algorithms are integral to modern-day living, so let's ensure we make them our friend, not our enemy.